2023-05-24 16:52:41,700 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db 2023-05-24 16:52:41,713 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-24 16:52:41,754 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=298, ProcessCount=176, AvailableMemoryMB=11720 2023-05-24 16:52:41,763 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:52:41,764 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2, deleteOnExit=true 2023-05-24 16:52:41,764 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:52:41,765 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/test.cache.data in system properties and HBase conf 2023-05-24 16:52:41,765 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:52:41,766 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/hadoop.log.dir in system properties and HBase conf 2023-05-24 16:52:41,766 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:52:41,767 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 16:52:41,767 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:52:41,886 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-24 16:52:42,285 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-24 16:52:42,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:52:42,291 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:52:42,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:52:42,292 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:52:42,293 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:52:42,293 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:52:42,294 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:52:42,294 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:52:42,294 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 16:52:42,295 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:52:42,295 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:52:42,296 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:52:42,296 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:52:42,297 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:52:42,822 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:52:42,834 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:52:42,839 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:52:43,125 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-24 16:52:43,282 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-24 16:52:43,301 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:52:43,339 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:52:43,370 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/java.io.tmpdir/Jetty_localhost_localdomain_39565_hdfs____.y8j3bs/webapp 2023-05-24 16:52:43,581 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:39565 2023-05-24 16:52:43,591 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
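The StartMiniClusterOption dump near the top of this log (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) is what HBaseTestingUtility uses to bring up HDFS, ZooKeeper and HBase for the test. A minimal sketch of equivalent setup code, assuming HBase 2.x test APIs and not taken from TestLogRolling itself:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Mirror the option values printed in the log above (an assumption, not the test's literal code).
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .createRootDir(false)
            .createWALDir(false)
            .build();
        util.startMiniCluster(option);  // produces the DFS/Jetty/ZooKeeper startup entries seen in this log
        try {
          // test body would run here
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }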
2023-05-24 16:52:43,593 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:52:43,594 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:52:44,117 WARN [Listener at localhost.localdomain/42025] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:52:44,201 WARN [Listener at localhost.localdomain/42025] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:52:44,220 WARN [Listener at localhost.localdomain/42025] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:52:44,226 INFO [Listener at localhost.localdomain/42025] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:52:44,230 INFO [Listener at localhost.localdomain/42025] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/java.io.tmpdir/Jetty_localhost_41567_datanode____.pt11ts/webapp 2023-05-24 16:52:44,309 INFO [Listener at localhost.localdomain/42025] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41567 2023-05-24 16:52:44,576 WARN [Listener at localhost.localdomain/42529] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:52:44,586 WARN [Listener at localhost.localdomain/42529] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:52:44,590 WARN [Listener at localhost.localdomain/42529] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:52:44,592 INFO [Listener at localhost.localdomain/42529] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:52:44,598 INFO [Listener at localhost.localdomain/42529] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/java.io.tmpdir/Jetty_localhost_44303_datanode____.vfawpv/webapp 2023-05-24 16:52:44,675 INFO [Listener at localhost.localdomain/42529] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44303 2023-05-24 16:52:44,684 WARN [Listener at localhost.localdomain/41887] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:52:44,984 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x214ba9e7a0cd4056: Processing first storage report for DS-82003e4a-47c2-4891-83de-ef6128a53f06 from datanode 20d9622d-ae3f-46b2-b827-a144967cb573 2023-05-24 16:52:44,985 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x214ba9e7a0cd4056: from storage DS-82003e4a-47c2-4891-83de-ef6128a53f06 node DatanodeRegistration(127.0.0.1:38617, datanodeUuid=20d9622d-ae3f-46b2-b827-a144967cb573, infoPort=35747, infoSecurePort=0, ipcPort=42529, storageInfo=lv=-57;cid=testClusterID;nsid=258721376;c=1684947162915), 
blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-05-24 16:52:44,985 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f9ff8592d3283d2: Processing first storage report for DS-cac45434-4c5e-469b-875c-2e8936d24e6c from datanode ba61a3c9-5156-47e8-b3d8-2b38eb51dcbe 2023-05-24 16:52:44,986 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f9ff8592d3283d2: from storage DS-cac45434-4c5e-469b-875c-2e8936d24e6c node DatanodeRegistration(127.0.0.1:35029, datanodeUuid=ba61a3c9-5156-47e8-b3d8-2b38eb51dcbe, infoPort=40893, infoSecurePort=0, ipcPort=41887, storageInfo=lv=-57;cid=testClusterID;nsid=258721376;c=1684947162915), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:52:44,986 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x214ba9e7a0cd4056: Processing first storage report for DS-a18618fa-a7b0-4a3b-be41-00c7783aecfe from datanode 20d9622d-ae3f-46b2-b827-a144967cb573 2023-05-24 16:52:44,986 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x214ba9e7a0cd4056: from storage DS-a18618fa-a7b0-4a3b-be41-00c7783aecfe node DatanodeRegistration(127.0.0.1:38617, datanodeUuid=20d9622d-ae3f-46b2-b827-a144967cb573, infoPort=35747, infoSecurePort=0, ipcPort=42529, storageInfo=lv=-57;cid=testClusterID;nsid=258721376;c=1684947162915), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:52:44,986 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f9ff8592d3283d2: Processing first storage report for DS-9b1f8828-684c-4502-9f6b-d13d664821de from datanode ba61a3c9-5156-47e8-b3d8-2b38eb51dcbe 2023-05-24 16:52:44,986 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f9ff8592d3283d2: from storage DS-9b1f8828-684c-4502-9f6b-d13d664821de node DatanodeRegistration(127.0.0.1:35029, datanodeUuid=ba61a3c9-5156-47e8-b3d8-2b38eb51dcbe, infoPort=40893, infoSecurePort=0, ipcPort=41887, storageInfo=lv=-57;cid=testClusterID;nsid=258721376;c=1684947162915), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 16:52:45,040 DEBUG [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db 2023-05-24 16:52:45,091 INFO [Listener at localhost.localdomain/41887] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/zookeeper_0, clientPort=62237, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:52:45,103 INFO [Listener at localhost.localdomain/41887] 
zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62237 2023-05-24 16:52:45,110 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:45,112 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:45,739 INFO [Listener at localhost.localdomain/41887] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db with version=8 2023-05-24 16:52:45,739 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:52:45,987 INFO [Listener at localhost.localdomain/41887] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-24 16:52:46,346 INFO [Listener at localhost.localdomain/41887] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:52:46,371 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,371 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,371 INFO [Listener at localhost.localdomain/41887] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:52:46,371 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,371 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:52:46,483 INFO [Listener at localhost.localdomain/41887] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:52:46,544 DEBUG [Listener at localhost.localdomain/41887] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-24 16:52:46,619 INFO [Listener at localhost.localdomain/41887] ipc.NettyRpcServer(120): Bind to /148.251.75.209:42189 2023-05-24 16:52:46,627 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:46,629 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so 
can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:46,645 INFO [Listener at localhost.localdomain/41887] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42189 connecting to ZooKeeper ensemble=127.0.0.1:62237 2023-05-24 16:52:46,861 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:421890x0, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:52:46,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42189-0x1017e6377c20000 connected 2023-05-24 16:52:46,884 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:52:46,885 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:52:46,889 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:52:46,898 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42189 2023-05-24 16:52:46,899 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42189 2023-05-24 16:52:46,899 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42189 2023-05-24 16:52:46,900 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42189 2023-05-24 16:52:46,900 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42189 2023-05-24 16:52:46,905 INFO [Listener at localhost.localdomain/41887] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db, hbase.cluster.distributed=false 2023-05-24 16:52:46,963 INFO [Listener at localhost.localdomain/41887] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:52:46,963 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,963 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,963 INFO [Listener at localhost.localdomain/41887] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:52:46,963 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-05-24 16:52:46,964 INFO [Listener at localhost.localdomain/41887] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:52:46,967 INFO [Listener at localhost.localdomain/41887] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:52:46,970 INFO [Listener at localhost.localdomain/41887] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38189 2023-05-24 16:52:46,972 INFO [Listener at localhost.localdomain/41887] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:52:46,977 DEBUG [Listener at localhost.localdomain/41887] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:52:46,978 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:46,979 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:46,981 INFO [Listener at localhost.localdomain/41887] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38189 connecting to ZooKeeper ensemble=127.0.0.1:62237 2023-05-24 16:52:46,985 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:381890x0, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:52:46,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38189-0x1017e6377c20001 connected 2023-05-24 16:52:46,986 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:52:46,987 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:52:46,988 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ZKUtil(164): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:52:46,989 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38189 2023-05-24 16:52:46,989 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38189 2023-05-24 16:52:46,990 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38189 2023-05-24 16:52:46,990 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38189 2023-05-24 16:52:46,991 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcExecutor(311): Started handlerCount=1 
with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38189 2023-05-24 16:52:46,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,009 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:52:47,011 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,032 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:52:47,032 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:52:47,032 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,033 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:52:47,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,42189,1684947165857 from backup master directory 2023-05-24 16:52:47,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:52:47,042 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,042 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:52:47,043 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
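Once the embedded ZooKeeper ensemble is up, the master registers a backup-master znode, deletes it, and becomes active ("Registered as active master=jenkins-hbase20.apache.org,42189,..."). A hedged sketch of how test code could confirm the active and backup masters through the public client API; the helper name and the util instance are assumptions carried over from the sketch above:

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    static void printMasters(HBaseTestingUtility util) throws Exception {
      // Connects through the same ZooKeeper quorum the log shows (127.0.0.1:62237 in this run).
      try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
           Admin admin = conn.getAdmin()) {
        ClusterMetrics metrics = admin.getClusterMetrics();
        System.out.println("active master:  " + metrics.getMasterName());
        System.out.println("backup masters: " + metrics.getBackupMasterNames());
      }
    }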
2023-05-24 16:52:47,043 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,047 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-24 16:52:47,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-24 16:52:47,129 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase.id with ID: 1563296e-8c54-46cd-b55f-b98be530f5a2 2023-05-24 16:52:47,181 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:47,195 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,241 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7038e785 to 127.0.0.1:62237 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:52:47,268 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7824340d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:52:47,288 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:52:47,290 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:52:47,297 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:52:47,327 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store-tmp 2023-05-24 16:52:47,356 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): 
Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:47,356 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:52:47,356 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:52:47,356 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:52:47,357 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:52:47,357 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:52:47,357 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:52:47,357 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:52:47,359 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/WALs/jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,378 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42189%2C1684947165857, suffix=, logDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/WALs/jenkins-hbase20.apache.org,42189,1684947165857, archiveDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/oldWALs, maxLogs=10 2023-05-24 16:52:47,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:52:47,417 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/WALs/jenkins-hbase20.apache.org,42189,1684947165857/jenkins-hbase20.apache.org%2C42189%2C1684947165857.1684947167392 2023-05-24 16:52:47,417 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:52:47,418 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:52:47,418 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:47,421 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,422 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,472 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,480 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:52:47,505 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:52:47,517 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:47,524 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,542 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:52:47,547 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:52:47,548 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=770220, jitterRate=-0.020615682005882263}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:52:47,548 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:52:47,550 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:52:47,566 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:52:47,567 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:52:47,569 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
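The DEBUG stack trace above ("Could not find replicate method on builder") is CommonFSUtils reflectively probing for HdfsDataOutputStreamBuilder.replicate(); on Hadoop versions without that method it simply skips the setting, so the exception is harmless. The master-local WAL itself is created by FSHLogProvider with "blocksize=256 MB, rollsize=128 MB". An illustrative configuration sketch that would yield the same shape; the literal values used by this run are not in the log, so treat them as assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static Configuration walConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");                   // maps to FSHLogProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L << 20);  // 256 MB WAL block size
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);   // roll at 50% of block size -> 128 MB
        return conf;
      }
    }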
2023-05-24 16:52:47,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-24 16:52:47,600 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 28 msec 2023-05-24 16:52:47,600 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:52:47,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:52:47,628 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:52:47,651 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:52:47,654 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-24 16:52:47,657 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:52:47,661 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:52:47,664 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:52:47,666 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:52:47,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:52:47,679 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:52:47,683 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:52:47,683 DEBUG 
[Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:52:47,683 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,683 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,42189,1684947165857, sessionid=0x1017e6377c20000, setting cluster-up flag (Was=false) 2023-05-24 16:52:47,697 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,701 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:52:47,703 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,707 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:47,710 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:52:47,712 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:47,714 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.hbase-snapshot/.tmp 2023-05-24 16:52:47,794 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(951): ClusterId : 1563296e-8c54-46cd-b55f-b98be530f5a2 2023-05-24 16:52:47,799 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:52:47,804 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:52:47,804 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:52:47,807 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:52:47,808 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ReadOnlyZKClient(139): Connect 0x4ac087cc to 127.0.0.1:62237 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:52:47,815 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 
16:52:47,815 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@440760e4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:52:47,816 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4038ecbc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:52:47,824 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:52:47,824 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:52:47,824 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:52:47,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:52:47,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:52:47,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:47,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:52:47,825 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:47,828 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947197828 2023-05-24 16:52:47,831 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:52:47,836 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:52:47,837 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:52:47,843 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:38189 2023-05-24 16:52:47,843 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', 
BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:52:47,845 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:52:47,848 INFO [RS:0;jenkins-hbase20:38189] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:52:47,849 INFO [RS:0;jenkins-hbase20:38189] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:52:47,849 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1022): About to register with Master. 2023-05-24 16:52:47,851 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,42189,1684947165857 with isa=jenkins-hbase20.apache.org/148.251.75.209:38189, startcode=1684947166962 2023-05-24 16:52:47,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:52:47,854 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:52:47,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:52:47,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:52:47,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
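The hbase:meta descriptor written by FSTableDescriptors above defines three in-memory column families (info, rep_barrier, table). For comparison, a sketch of the public builder API that produces an equivalent 'info'-style family on a hypothetical table; the table name and values here are illustrative, not taken from the test:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      public static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setInMemory(true)      // IN_MEMORY => 'true'
                .setMaxVersions(3)      // VERSIONS => '3'
                .setBlocksize(8192)     // BLOCKSIZE => '8192'
                .build())
            .build();
      }
    }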
2023-05-24 16:52:47,859 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:52:47,861 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:52:47,862 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:52:47,866 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:52:47,867 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:52:47,870 DEBUG [RS:0;jenkins-hbase20:38189] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:52:47,878 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947167869,5,FailOnTimeoutGroup] 2023-05-24 16:52:47,880 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947167880,5,FailOnTimeoutGroup] 2023-05-24 16:52:47,882 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:47,883 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:52:47,887 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:47,888 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
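(Annotation) The log- and hfile-cleaner chains initialized above are pluggable. A hedged sketch of the standard keys that control which delegate classes run; the values below simply echo the classes visible in this log and are not a recommendation:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Fragment intended for cluster/test setup code, not a complete program.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
      + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
      + "org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner,"
      + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");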
2023-05-24 16:52:47,894 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:52:47,895 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:52:47,895 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db 2023-05-24 16:52:47,915 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:47,918 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:52:47,922 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/info 2023-05-24 16:52:47,923 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:52:47,924 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 
16:52:47,925 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:52:47,929 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:52:47,930 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:52:47,931 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:47,932 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:52:47,935 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/table 2023-05-24 16:52:47,936 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:52:47,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:47,939 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740 2023-05-24 16:52:47,940 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740 2023-05-24 16:52:47,945 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:52:47,948 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:52:47,952 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:52:47,953 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=846009, jitterRate=0.07575665414333344}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:52:47,953 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:52:47,953 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:52:47,954 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:52:47,954 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:52:47,954 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:52:47,954 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:52:47,955 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:52:47,955 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:52:47,959 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:52:47,960 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:52:47,966 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:52:47,978 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:52:47,981 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:52:47,999 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47071, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:52:48,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,027 DEBUG 
[RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db 2023-05-24 16:52:48,027 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42025 2023-05-24 16:52:48,027 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:52:48,031 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:52:48,032 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ZKUtil(162): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,032 WARN [RS:0;jenkins-hbase20:38189] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:52:48,033 INFO [RS:0;jenkins-hbase20:38189] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:52:48,033 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,035 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38189,1684947166962] 2023-05-24 16:52:48,042 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ZKUtil(162): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,051 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:52:48,059 INFO [RS:0;jenkins-hbase20:38189] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:52:48,078 INFO [RS:0;jenkins-hbase20:38189] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:52:48,082 INFO [RS:0;jenkins-hbase20:38189] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:52:48,082 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,083 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:52:48,088 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
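(Annotation) The FSHLogProvider instantiated above is selected by the WAL provider setting. A hedged sketch of the relevant keys; these are standard hbase-site.xml keys, with values chosen for illustration rather than taken from this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Fragment for cluster/test setup; "filesystem" maps to FSHLogProvider,
    // while "asyncfs" would select AsyncFSWALProvider instead.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "filesystem");
    conf.setInt("hbase.regionserver.maxlogs", 32);  // cap on un-archived WAL files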
2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,089 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:52:48,090 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,090 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,090 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,090 DEBUG [RS:0;jenkins-hbase20:38189] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:52:48,091 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,091 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,091 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,105 INFO [RS:0;jenkins-hbase20:38189] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:52:48,107 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38189,1684947166962-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:52:48,120 INFO [RS:0;jenkins-hbase20:38189] regionserver.Replication(203): jenkins-hbase20.apache.org,38189,1684947166962 started 2023-05-24 16:52:48,120 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38189,1684947166962, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38189, sessionid=0x1017e6377c20001 2023-05-24 16:52:48,121 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:52:48,121 DEBUG [RS:0;jenkins-hbase20:38189] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,121 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38189,1684947166962' 2023-05-24 16:52:48,121 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:52:48,121 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:52:48,122 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:52:48,122 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:52:48,122 DEBUG [RS:0;jenkins-hbase20:38189] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,122 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38189,1684947166962' 2023-05-24 16:52:48,122 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:52:48,123 DEBUG [RS:0;jenkins-hbase20:38189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:52:48,123 DEBUG [RS:0;jenkins-hbase20:38189] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:52:48,123 INFO [RS:0;jenkins-hbase20:38189] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:52:48,123 INFO [RS:0;jenkins-hbase20:38189] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
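(Annotation) The flush-table-proc and online-snapshot members registered above are the region-server side of two ZooKeeper-coordinated distributed procedures. A hedged sketch of client calls that drive them; "someTable" is a placeholder name, not a table from this run, and an Admin from an open Connection is assumed:

    import java.util.HashMap;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Method-level sketch.
    static void flushAndSnapshot(Admin admin) throws Exception {
      // Distributed table flush, dispatched under the "flush-table-proc" signature.
      admin.execProcedure("flush-table-proc", "someTable", new HashMap<>());
      // Table snapshot, coordinated through the online-snapshot procedure.
      admin.snapshot("someTable-snap", TableName.valueOf("someTable"));
    }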
2023-05-24 16:52:48,133 DEBUG [jenkins-hbase20:42189] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:52:48,136 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38189,1684947166962, state=OPENING 2023-05-24 16:52:48,145 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:52:48,146 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:48,147 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:52:48,151 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38189,1684947166962}] 2023-05-24 16:52:48,237 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38189%2C1684947166962, suffix=, logDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962, archiveDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/oldWALs, maxLogs=32 2023-05-24 16:52:48,252 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947168241 2023-05-24 16:52:48,252 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:52:48,334 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,337 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:52:48,340 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:52:48,351 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:52:48,352 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:52:48,355 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38189%2C1684947166962.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962, archiveDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/oldWALs, maxLogs=32 2023-05-24 16:52:48,370 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.meta.1684947168357.meta 2023-05-24 16:52:48,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:52:48,371 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:52:48,374 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:52:48,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:52:48,393 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 16:52:48,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:52:48,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:48,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:52:48,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:52:48,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:52:48,404 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/info 2023-05-24 16:52:48,404 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/info 2023-05-24 16:52:48,405 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:52:48,405 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:48,406 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:52:48,407 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:52:48,407 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:52:48,408 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:52:48,409 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:48,409 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:52:48,411 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/table 2023-05-24 16:52:48,411 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/table 2023-05-24 16:52:48,411 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:52:48,412 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:48,415 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740 2023-05-24 16:52:48,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740 2023-05-24 16:52:48,421 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:52:48,424 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:52:48,425 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=695759, jitterRate=-0.11529676616191864}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:52:48,425 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:52:48,434 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947168327 2023-05-24 16:52:48,451 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:52:48,452 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:52:48,452 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,38189,1684947166962, state=OPEN 2023-05-24 16:52:48,454 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:52:48,454 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:52:48,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:52:48,461 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,38189,1684947166962 in 303 msec 2023-05-24 
16:52:48,468 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:52:48,468 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 495 msec 2023-05-24 16:52:48,474 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 713 msec 2023-05-24 16:52:48,475 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947168475, completionTime=-1 2023-05-24 16:52:48,475 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:52:48,476 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 16:52:48,533 DEBUG [hconnection-0x32d2d5b2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:52:48,536 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43106, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:52:48,550 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:52:48,550 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947228550 2023-05-24 16:52:48,550 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947288550 2023-05-24 16:52:48,551 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 74 msec 2023-05-24 16:52:48,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42189,1684947165857-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42189,1684947165857-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42189,1684947165857-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,580 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:42189, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:52:48,580 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
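(Annotation) The chore periods scheduled above come from configuration. As a hedged sketch, two of the corresponding keys, both in milliseconds; other chores have analogous keys not listed here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Fragment for cluster/test setup; 300000 ms matches the periods in the log.
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.balancer.period", 300000);    // BalancerChore
    conf.setInt("hbase.normalizer.period", 300000);  // RegionNormalizerChore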
2023-05-24 16:52:48,587 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:52:48,594 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-24 16:52:48,595 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:52:48,604 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:52:48,606 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:52:48,609 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:52:48,630 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/hbase/namespace/c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:48,633 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/hbase/namespace/c6db801c5215af631716fed6aab54d35 empty. 
2023-05-24 16:52:48,634 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/hbase/namespace/c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:48,634 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:52:48,686 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:52:48,689 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c6db801c5215af631716fed6aab54d35, NAME => 'hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp 2023-05-24 16:52:48,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:48,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c6db801c5215af631716fed6aab54d35, disabling compactions & flushes 2023-05-24 16:52:48,711 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:52:48,712 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:52:48,712 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. after waiting 0 ms 2023-05-24 16:52:48,712 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:52:48,712 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 
2023-05-24 16:52:48,712 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c6db801c5215af631716fed6aab54d35: 2023-05-24 16:52:48,717 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:52:48,729 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947168720"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947168720"}]},"ts":"1684947168720"} 2023-05-24 16:52:48,752 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:52:48,754 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:52:48,758 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947168754"}]},"ts":"1684947168754"} 2023-05-24 16:52:48,762 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:52:48,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c6db801c5215af631716fed6aab54d35, ASSIGN}] 2023-05-24 16:52:48,774 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c6db801c5215af631716fed6aab54d35, ASSIGN 2023-05-24 16:52:48,776 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c6db801c5215af631716fed6aab54d35, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38189,1684947166962; forceNewPlan=false, retain=false 2023-05-24 16:52:48,927 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c6db801c5215af631716fed6aab54d35, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:48,928 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947168927"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947168927"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947168927"}]},"ts":"1684947168927"} 2023-05-24 16:52:48,934 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c6db801c5215af631716fed6aab54d35, server=jenkins-hbase20.apache.org,38189,1684947166962}] 2023-05-24 16:52:49,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 
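(Annotation) The Put shown above writes the region's info:regioninfo and info:state cells into hbase:meta. As a hedged sketch, with an open Connection assumed, the same cells can be read back with an ordinary client scan; the column names match the ones in the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Method-level sketch: print each catalog row with its region state.
    static void dumpMetaRegionStates(Connection conn) throws Exception {
      try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
           ResultScanner scanner = meta.getScanner(new Scan()
               .addColumn(Bytes.toBytes("info"), Bytes.toBytes("regioninfo"))
               .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state")))) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()) + " -> "
              + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
        }
      }
    }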
2023-05-24 16:52:49,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c6db801c5215af631716fed6aab54d35, NAME => 'hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:52:49,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:49,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,107 INFO [StoreOpener-c6db801c5215af631716fed6aab54d35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,109 DEBUG [StoreOpener-c6db801c5215af631716fed6aab54d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/info 2023-05-24 16:52:49,109 DEBUG [StoreOpener-c6db801c5215af631716fed6aab54d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/info 2023-05-24 16:52:49,109 INFO [StoreOpener-c6db801c5215af631716fed6aab54d35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c6db801c5215af631716fed6aab54d35 columnFamilyName info 2023-05-24 16:52:49,110 INFO [StoreOpener-c6db801c5215af631716fed6aab54d35-1] regionserver.HStore(310): Store=c6db801c5215af631716fed6aab54d35/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:49,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,113 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,118 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c6db801c5215af631716fed6aab54d35 2023-05-24 16:52:49,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:52:49,123 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c6db801c5215af631716fed6aab54d35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=782281, jitterRate=-0.005279228091239929}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:52:49,123 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c6db801c5215af631716fed6aab54d35: 2023-05-24 16:52:49,127 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35., pid=6, masterSystemTime=1684947169090 2023-05-24 16:52:49,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:52:49,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 
2023-05-24 16:52:49,138 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c6db801c5215af631716fed6aab54d35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:49,138 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947169137"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947169137"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947169137"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947169137"}]},"ts":"1684947169137"} 2023-05-24 16:52:49,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:52:49,149 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c6db801c5215af631716fed6aab54d35, server=jenkins-hbase20.apache.org,38189,1684947166962 in 210 msec 2023-05-24 16:52:49,153 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:52:49,153 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c6db801c5215af631716fed6aab54d35, ASSIGN in 379 msec 2023-05-24 16:52:49,155 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:52:49,156 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947169155"}]},"ts":"1684947169155"} 2023-05-24 16:52:49,160 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:52:49,164 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:52:49,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 568 msec 2023-05-24 16:52:49,208 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:52:49,210 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:52:49,210 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:49,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:52:49,275 DEBUG [Listener at localhost.localdomain/41887-EventThread] 
zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:52:49,280 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-05-24 16:52:49,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:52:49,304 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:52:49,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-05-24 16:52:49,328 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:52:49,331 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:52:49,331 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.288sec 2023-05-24 16:52:49,335 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:52:49,337 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:52:49,337 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:52:49,339 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42189,1684947165857-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:52:49,341 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42189,1684947165857-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
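(Annotation) At this point the master reports that initialization is complete. For orientation, a hedged, minimal sketch of the test-utility bring-up that produces a startup sequence like this one; the names are the standard test API, and nothing is copied from this run:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();                           // DFS + ZK + master + one region server
        try (Admin admin = util.getConnection().getAdmin()) {
          // Active master, analogous to the "Minicluster is up; activeMaster=..." line below.
          System.out.println(admin.getClusterMetrics().getMasterName());
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }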
2023-05-24 16:52:49,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:52:49,406 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ReadOnlyZKClient(139): Connect 0x2074ac41 to 127.0.0.1:62237 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:52:49,415 DEBUG [Listener at localhost.localdomain/41887] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11eab69b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:52:49,429 DEBUG [hconnection-0x67a60e1d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:52:49,441 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43118, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:52:49,454 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:52:49,455 INFO [Listener at localhost.localdomain/41887] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:52:49,462 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:52:49,462 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:52:49,463 INFO [Listener at localhost.localdomain/41887] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:52:49,471 DEBUG [Listener at localhost.localdomain/41887] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 16:52:49,476 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58686, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 16:52:49,485 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 16:52:49,485 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 16:52:49,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:52:49,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-24 16:52:49,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:52:49,494 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:52:49,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-24 16:52:49,499 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,501 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698 empty. 
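The two WARN lines above show that this test intentionally runs with a very small region max file size (786432 bytes) and memstore flush size (8192 bytes), so flushes and split checks happen within seconds, and the table being created has a single 'info' family kept to one version. As a rough sketch only (the class and method names below are the standard HBase 2.x client API, but everything else, including how the test actually wires this up, is assumed rather than taken from the test's source), an equivalent table could be created like this:

    // Illustrative sketch: create a table like the one in the log above, with a single
    // 'info' family and deliberately tiny size thresholds to force frequent flushes.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateSmallTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("TestLogRolling-testSlowSyncLogRolling"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder("info".getBytes())
                  .setMaxVersions(1)              // VERSIONS => '1' in the descriptor above
                  .build())
              .setMaxFileSize(786432L)            // matches the MAX_FILESIZE warning above
              .setMemStoreFlushSize(8192L)        // matches the MEMSTORE_FLUSHSIZE warning above
              .build();
          admin.createTable(td);
        }
      }
    }

Issuing createTable drives the CreateTableProcedure whose PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE and POST_OPERATION states appear as pid=9 in the entries that follow.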
2023-05-24 16:52:49,503 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,503 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-24 16:52:49,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:52:49,530 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 16:52:49,532 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 93b8c297a22e1f8cb31d38047fc60698, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/.tmp 2023-05-24 16:52:49,551 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:49,551 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 93b8c297a22e1f8cb31d38047fc60698, disabling compactions & flushes 2023-05-24 16:52:49,551 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:52:49,551 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:52:49,552 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. after waiting 0 ms 2023-05-24 16:52:49,552 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:52:49,552 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 
2023-05-24 16:52:49,552 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:52:49,556 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:52:49,558 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684947169558"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947169558"}]},"ts":"1684947169558"} 2023-05-24 16:52:49,561 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:52:49,562 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:52:49,563 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947169562"}]},"ts":"1684947169562"} 2023-05-24 16:52:49,565 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-24 16:52:49,568 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=93b8c297a22e1f8cb31d38047fc60698, ASSIGN}] 2023-05-24 16:52:49,571 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=93b8c297a22e1f8cb31d38047fc60698, ASSIGN 2023-05-24 16:52:49,573 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=93b8c297a22e1f8cb31d38047fc60698, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38189,1684947166962; forceNewPlan=false, retain=false 2023-05-24 16:52:49,725 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=93b8c297a22e1f8cb31d38047fc60698, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:49,726 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684947169725"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947169725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947169725"}]},"ts":"1684947169725"} 2023-05-24 16:52:49,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 93b8c297a22e1f8cb31d38047fc60698, server=jenkins-hbase20.apache.org,38189,1684947166962}] 2023-05-24 16:52:49,902 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:52:49,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 93b8c297a22e1f8cb31d38047fc60698, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:52:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:52:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,905 INFO [StoreOpener-93b8c297a22e1f8cb31d38047fc60698-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,908 DEBUG [StoreOpener-93b8c297a22e1f8cb31d38047fc60698-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info 2023-05-24 16:52:49,908 DEBUG [StoreOpener-93b8c297a22e1f8cb31d38047fc60698-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info 2023-05-24 16:52:49,909 INFO [StoreOpener-93b8c297a22e1f8cb31d38047fc60698-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 93b8c297a22e1f8cb31d38047fc60698 columnFamilyName info 2023-05-24 16:52:49,910 INFO [StoreOpener-93b8c297a22e1f8cb31d38047fc60698-1] regionserver.HStore(310): Store=93b8c297a22e1f8cb31d38047fc60698/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:52:49,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:52:49,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:52:49,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 93b8c297a22e1f8cb31d38047fc60698; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=708222, jitterRate=-0.0994502454996109}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:52:49,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:52:49,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698., pid=11, masterSystemTime=1684947169890 2023-05-24 16:52:49,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:52:49,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 
2023-05-24 16:52:49,927 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=93b8c297a22e1f8cb31d38047fc60698, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:52:49,927 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684947169927"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947169927"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947169927"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947169927"}]},"ts":"1684947169927"} 2023-05-24 16:52:49,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 16:52:49,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 93b8c297a22e1f8cb31d38047fc60698, server=jenkins-hbase20.apache.org,38189,1684947166962 in 195 msec 2023-05-24 16:52:49,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 16:52:49,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=93b8c297a22e1f8cb31d38047fc60698, ASSIGN in 366 msec 2023-05-24 16:52:49,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:52:49,940 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947169939"}]},"ts":"1684947169939"} 2023-05-24 16:52:49,942 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-24 16:52:49,945 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:52:49,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 457 msec 2023-05-24 16:52:53,920 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-24 16:52:54,057 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 16:52:54,060 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 16:52:54,062 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-24 16:52:55,984 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 16:52:55,986 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-24 16:52:59,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42189] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:52:59,527 INFO [Listener at localhost.localdomain/41887] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-24 16:52:59,533 DEBUG [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-24 16:52:59,536 DEBUG [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:53:11,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38189] regionserver.HRegion(9158): Flush requested on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:53:11,583 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 93b8c297a22e1f8cb31d38047fc60698 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:53:11,679 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/c19e797a3a75471a821688691346fb01 2023-05-24 16:53:11,720 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/c19e797a3a75471a821688691346fb01 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01 2023-05-24 16:53:11,730 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01, entries=7, sequenceid=11, filesize=12.1 K 2023-05-24 16:53:11,733 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 93b8c297a22e1f8cb31d38047fc60698 in 150ms, sequenceid=11, compaction requested=false 2023-05-24 16:53:11,734 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:53:19,808 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:22,016 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:24,222 INFO [sync.4] 
wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:26,428 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:26,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38189] regionserver.HRegion(9158): Flush requested on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:53:26,428 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 93b8c297a22e1f8cb31d38047fc60698 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:53:26,631 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:26,654 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/e5dad12354e84036872728a717519460 2023-05-24 16:53:26,665 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/e5dad12354e84036872728a717519460 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460 2023-05-24 16:53:26,676 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460, entries=7, sequenceid=21, filesize=12.1 K 2023-05-24 16:53:26,879 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:26,880 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 93b8c297a22e1f8cb31d38047fc60698 in 451ms, sequenceid=21, compaction requested=false 2023-05-24 16:53:26,881 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:53:26,881 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-24 16:53:26,881 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:53:26,884 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01 because midkey is the same as first or last row 2023-05-24 16:53:28,634 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:30,838 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:30,839 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38189%2C1684947166962:(num 1684947168241) roll requested 2023-05-24 16:53:30,839 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:31,054 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK], DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK]] 2023-05-24 16:53:31,056 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947168241 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947210840 2023-05-24 16:53:31,057 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:31,057 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947168241 is not closed yet, will try archiving it next time 2023-05-24 16:53:40,860 INFO [Listener at localhost.localdomain/41887] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 16:53:45,864 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:45,864 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], 
DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:45,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38189] regionserver.HRegion(9158): Flush requested on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:53:45,864 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38189%2C1684947166962:(num 1684947210840) roll requested 2023-05-24 16:53:45,865 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 93b8c297a22e1f8cb31d38047fc60698 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:53:47,866 INFO [Listener at localhost.localdomain/41887] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 16:53:50,868 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:50,869 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:50,883 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:50,883 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:50,884 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947210840 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947225864 2023-05-24 16:53:50,885 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35029,DS-cac45434-4c5e-469b-875c-2e8936d24e6c,DISK], DatanodeInfoWithStorage[127.0.0.1:38617,DS-82003e4a-47c2-4891-83de-ef6128a53f06,DISK]] 2023-05-24 16:53:50,885 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962/jenkins-hbase20.apache.org%2C38189%2C1684947166962.1684947210840 is not closed yet, will try archiving it next time 2023-05-24 16:53:50,897 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), 
to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/891812f9e7dd4067a1521c2c6a459129 2023-05-24 16:53:50,908 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/891812f9e7dd4067a1521c2c6a459129 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129 2023-05-24 16:53:50,917 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129, entries=7, sequenceid=31, filesize=12.1 K 2023-05-24 16:53:50,920 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 93b8c297a22e1f8cb31d38047fc60698 in 5056ms, sequenceid=31, compaction requested=true 2023-05-24 16:53:50,920 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:53:50,920 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-24 16:53:50,921 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:53:50,921 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01 because midkey is the same as first or last row 2023-05-24 16:53:50,923 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:53:50,923 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:53:50,927 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:53:50,929 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HStore(1912): 93b8c297a22e1f8cb31d38047fc60698/info is initiating minor compaction (all files) 2023-05-24 16:53:50,929 INFO [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 93b8c297a22e1f8cb31d38047fc60698/info in TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 
2023-05-24 16:53:50,929 INFO [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129] into tmpdir=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp, totalSize=36.3 K 2023-05-24 16:53:50,931 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] compactions.Compactor(207): Compacting c19e797a3a75471a821688691346fb01, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684947179541 2023-05-24 16:53:50,932 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] compactions.Compactor(207): Compacting e5dad12354e84036872728a717519460, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1684947193585 2023-05-24 16:53:50,932 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] compactions.Compactor(207): Compacting 891812f9e7dd4067a1521c2c6a459129, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1684947208431 2023-05-24 16:53:50,960 INFO [RS:0;jenkins-hbase20:38189-shortCompactions-0] throttle.PressureAwareThroughputController(145): 93b8c297a22e1f8cb31d38047fc60698#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:53:50,981 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/c2a25ac8129d4bbb87731e165d6ec900 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c2a25ac8129d4bbb87731e165d6ec900 2023-05-24 16:53:50,997 INFO [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 93b8c297a22e1f8cb31d38047fc60698/info of 93b8c297a22e1f8cb31d38047fc60698 into c2a25ac8129d4bbb87731e165d6ec900(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
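The compaction entries above show ExploringCompactionPolicy selecting all three 12.1 K flush files (37197 bytes in total, reported as "1 in ratio") and rewriting them into a single 27.0 K file. The "in ratio" idea behind ratio-based selection is that a file stays in a candidate set only if it is not much larger than the rest of the set combined, using the ratio of 1.2 logged when the store's CompactionConfiguration was created. A minimal sketch of that check, with the caveat that this simplified method (name and signature invented here) only illustrates the ratio test and is not HBase's actual selection code:

    // Illustrative sketch of a ratio-based "in ratio" check for compaction selection.
    // ratio corresponds to the 1.200000 value logged when the store opened.
    static boolean allFilesInRatio(long[] fileSizes, double ratio) {
      long total = 0;
      for (long s : fileSizes) {
        total += s;
      }
      for (long s : fileSizes) {
        // A file is "in ratio" if it is no bigger than ratio * (sum of the other files).
        if (s > ratio * (total - s)) {
          return false;
        }
      }
      return true;
    }

For the three roughly 12.4 KB files above, each one is far below 1.2 times the sum of the other two, so the whole set qualifies and is compacted into one file.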
2023-05-24 16:53:50,997 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:53:50,997 INFO [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698., storeName=93b8c297a22e1f8cb31d38047fc60698/info, priority=13, startTime=1684947230922; duration=0sec 2023-05-24 16:53:50,998 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-24 16:53:50,999 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:53:50,999 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c2a25ac8129d4bbb87731e165d6ec900 because midkey is the same as first or last row 2023-05-24 16:53:50,999 DEBUG [RS:0;jenkins-hbase20:38189-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:54:02,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38189] regionserver.HRegion(9158): Flush requested on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:54:02,992 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 93b8c297a22e1f8cb31d38047fc60698 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:54:03,019 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/be25350c646e4caf900dd52125e0f772 2023-05-24 16:54:03,028 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/be25350c646e4caf900dd52125e0f772 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/be25350c646e4caf900dd52125e0f772 2023-05-24 16:54:03,034 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/be25350c646e4caf900dd52125e0f772, entries=7, sequenceid=42, filesize=12.1 K 2023-05-24 16:54:03,036 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 93b8c297a22e1f8cb31d38047fc60698 in 43ms, sequenceid=42, compaction requested=false 2023-05-24 16:54:03,036 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:54:03,036 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split 
because info size=39.1 K, sizeToCheck=16.0 K 2023-05-24 16:54:03,036 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:54:03,036 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c2a25ac8129d4bbb87731e165d6ec900 because midkey is the same as first or last row 2023-05-24 16:54:11,009 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 16:54:11,012 INFO [Listener at localhost.localdomain/41887] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 16:54:11,013 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2074ac41 to 127.0.0.1:62237 2023-05-24 16:54:11,013 DEBUG [Listener at localhost.localdomain/41887] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:54:11,015 DEBUG [Listener at localhost.localdomain/41887] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 16:54:11,015 DEBUG [Listener at localhost.localdomain/41887] util.JVMClusterUtil(257): Found active master hash=2022262167, stopped=false 2023-05-24 16:54:11,015 INFO [Listener at localhost.localdomain/41887] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:54:11,017 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:54:11,017 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:54:11,018 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:11,018 INFO [Listener at localhost.localdomain/41887] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 16:54:11,019 DEBUG [Listener at localhost.localdomain/41887] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7038e785 to 127.0.0.1:62237 2023-05-24 16:54:11,019 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:54:11,019 DEBUG [Listener at localhost.localdomain/41887] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:54:11,020 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:54:11,020 INFO [Listener at localhost.localdomain/41887] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38189,1684947166962' ***** 2023-05-24 16:54:11,020 INFO [Listener at localhost.localdomain/41887] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:54:11,020 INFO 
[RS:0;jenkins-hbase20:38189] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:54:11,020 INFO [RS:0;jenkins-hbase20:38189] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:54:11,020 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:54:11,021 INFO [RS:0;jenkins-hbase20:38189] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 16:54:11,021 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(3303): Received CLOSE for c6db801c5215af631716fed6aab54d35 2023-05-24 16:54:11,022 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(3303): Received CLOSE for 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:54:11,022 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:54:11,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c6db801c5215af631716fed6aab54d35, disabling compactions & flushes 2023-05-24 16:54:11,023 DEBUG [RS:0;jenkins-hbase20:38189] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ac087cc to 127.0.0.1:62237 2023-05-24 16:54:11,023 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:54:11,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:54:11,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:54:11,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. after waiting 0 ms 2023-05-24 16:54:11,023 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:54:11,023 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 16:54:11,023 INFO [RS:0;jenkins-hbase20:38189] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 16:54:11,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 
2023-05-24 16:54:11,023 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:54:11,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c6db801c5215af631716fed6aab54d35 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:54:11,024 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 16:54:11,024 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1478): Online Regions={c6db801c5215af631716fed6aab54d35=hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35., 1588230740=hbase:meta,,1.1588230740, 93b8c297a22e1f8cb31d38047fc60698=TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.} 2023-05-24 16:54:11,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:54:11,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:54:11,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:54:11,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:54:11,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:54:11,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-24 16:54:11,026 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1504): Waiting on 1588230740, 93b8c297a22e1f8cb31d38047fc60698, c6db801c5215af631716fed6aab54d35 2023-05-24 16:54:11,048 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/.tmp/info/28f95018a5964560b2bc249966937fb6 2023-05-24 16:54:11,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/.tmp/info/f107b9bb7a7d467e9620e4b6a09df022 2023-05-24 16:54:11,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/.tmp/info/f107b9bb7a7d467e9620e4b6a09df022 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/info/f107b9bb7a7d467e9620e4b6a09df022 2023-05-24 16:54:11,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/info/f107b9bb7a7d467e9620e4b6a09df022, entries=2, 
sequenceid=6, filesize=4.8 K 2023-05-24 16:54:11,073 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/.tmp/table/025cf494a39244d6893abfef23a996e8 2023-05-24 16:54:11,074 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c6db801c5215af631716fed6aab54d35 in 51ms, sequenceid=6, compaction requested=false 2023-05-24 16:54:11,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/namespace/c6db801c5215af631716fed6aab54d35/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 16:54:11,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c6db801c5215af631716fed6aab54d35: 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684947168595.c6db801c5215af631716fed6aab54d35. 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 93b8c297a22e1f8cb31d38047fc60698, disabling compactions & flushes 2023-05-24 16:54:11,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. after waiting 0 ms 2023-05-24 16:54:11,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 
2023-05-24 16:54:11,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 93b8c297a22e1f8cb31d38047fc60698 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-24 16:54:11,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/.tmp/info/28f95018a5964560b2bc249966937fb6 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/info/28f95018a5964560b2bc249966937fb6 2023-05-24 16:54:11,093 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/info/28f95018a5964560b2bc249966937fb6, entries=20, sequenceid=14, filesize=7.4 K 2023-05-24 16:54:11,096 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/.tmp/table/025cf494a39244d6893abfef23a996e8 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/table/025cf494a39244d6893abfef23a996e8 2023-05-24 16:54:11,116 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/table/025cf494a39244d6893abfef23a996e8, entries=4, sequenceid=14, filesize=4.8 K 2023-05-24 16:54:11,118 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 93ms, sequenceid=14, compaction requested=false 2023-05-24 16:54:11,134 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-24 16:54:11,135 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 16:54:11,137 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:54:11,137 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:54:11,137 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 16:54:11,169 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 16:54:11,169 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 16:54:11,227 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1504): Waiting on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:54:11,427 DEBUG [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1504): Waiting on 93b8c297a22e1f8cb31d38047fc60698 2023-05-24 16:54:11,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/6ccf2cdeed7b4a7ca27382629df91103 2023-05-24 16:54:11,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/.tmp/info/6ccf2cdeed7b4a7ca27382629df91103 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/6ccf2cdeed7b4a7ca27382629df91103 2023-05-24 16:54:11,528 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/6ccf2cdeed7b4a7ca27382629df91103, entries=3, sequenceid=48, filesize=7.9 K 2023-05-24 16:54:11,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 93b8c297a22e1f8cb31d38047fc60698 in 445ms, sequenceid=48, compaction requested=true 2023-05-24 16:54:11,531 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129] to archive 2023-05-24 16:54:11,533 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-24 16:54:11,538 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01 to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/archive/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/c19e797a3a75471a821688691346fb01 2023-05-24 16:54:11,540 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460 to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/archive/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/e5dad12354e84036872728a717519460 2023-05-24 16:54:11,542 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129 to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/archive/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/info/891812f9e7dd4067a1521c2c6a459129 2023-05-24 16:54:11,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/data/default/TestLogRolling-testSlowSyncLogRolling/93b8c297a22e1f8cb31d38047fc60698/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-24 16:54:11,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:54:11,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 93b8c297a22e1f8cb31d38047fc60698: 2023-05-24 16:54:11,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1684947169484.93b8c297a22e1f8cb31d38047fc60698. 2023-05-24 16:54:11,628 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38189,1684947166962; all regions closed. 
2023-05-24 16:54:11,629 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:54:11,639 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/oldWALs 2023-05-24 16:54:11,639 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38189%2C1684947166962.meta:.meta(num 1684947168357) 2023-05-24 16:54:11,639 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/WALs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:54:12,060 DEBUG [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/oldWALs 2023-05-24 16:54:12,060 INFO [RS:0;jenkins-hbase20:38189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C38189%2C1684947166962:(num 1684947225864) 2023-05-24 16:54:12,060 DEBUG [RS:0;jenkins-hbase20:38189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:54:12,060 INFO [RS:0;jenkins-hbase20:38189] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:54:12,060 INFO [RS:0;jenkins-hbase20:38189] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 16:54:12,061 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:54:12,062 INFO [RS:0;jenkins-hbase20:38189] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38189 2023-05-24 16:54:12,070 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38189,1684947166962 2023-05-24 16:54:12,070 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:54:12,070 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:54:12,072 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38189,1684947166962] 2023-05-24 16:54:12,072 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38189,1684947166962; numProcessing=1 2023-05-24 16:54:12,073 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38189,1684947166962 already deleted, retry=false 2023-05-24 16:54:12,073 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38189,1684947166962 expired; onlineServers=0 2023-05-24 16:54:12,073 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region 
server 'jenkins-hbase20.apache.org,42189,1684947165857' ***** 2023-05-24 16:54:12,073 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 16:54:12,074 DEBUG [M:0;jenkins-hbase20:42189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39b44fac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:54:12,074 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:54:12,074 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,42189,1684947165857; all regions closed. 2023-05-24 16:54:12,074 DEBUG [M:0;jenkins-hbase20:42189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:54:12,074 DEBUG [M:0;jenkins-hbase20:42189] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 16:54:12,074 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 16:54:12,075 DEBUG [M:0;jenkins-hbase20:42189] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 16:54:12,075 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947167869] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947167869,5,FailOnTimeoutGroup] 2023-05-24 16:54:12,075 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947167880] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947167880,5,FailOnTimeoutGroup] 2023-05-24 16:54:12,075 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 16:54:12,075 INFO [M:0;jenkins-hbase20:42189] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 16:54:12,075 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:12,075 INFO [M:0;jenkins-hbase20:42189] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-24 16:54:12,076 INFO [M:0;jenkins-hbase20:42189] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 16:54:12,076 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:54:12,076 DEBUG [M:0;jenkins-hbase20:42189] master.HMaster(1512): Stopping service threads 2023-05-24 16:54:12,076 INFO [M:0;jenkins-hbase20:42189] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 16:54:12,077 INFO [M:0;jenkins-hbase20:42189] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 16:54:12,077 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-24 16:54:12,077 DEBUG [M:0;jenkins-hbase20:42189] zookeeper.ZKUtil(398): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 16:54:12,077 WARN [M:0;jenkins-hbase20:42189] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 16:54:12,077 INFO [M:0;jenkins-hbase20:42189] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 16:54:12,078 INFO [M:0;jenkins-hbase20:42189] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 16:54:12,078 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:54:12,078 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:12,078 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:12,078 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:54:12,078 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 16:54:12,078 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-05-24 16:54:12,096 INFO [M:0;jenkins-hbase20:42189] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/736ffb0198b5401d99370f461620eae6 2023-05-24 16:54:12,096 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:54:12,102 INFO [M:0;jenkins-hbase20:42189] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 736ffb0198b5401d99370f461620eae6 2023-05-24 16:54:12,104 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/736ffb0198b5401d99370f461620eae6 as hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/736ffb0198b5401d99370f461620eae6 2023-05-24 16:54:12,110 INFO [M:0;jenkins-hbase20:42189] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 736ffb0198b5401d99370f461620eae6 2023-05-24 16:54:12,110 INFO [M:0;jenkins-hbase20:42189] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/736ffb0198b5401d99370f461620eae6, entries=11, sequenceid=100, filesize=6.1 K 2023-05-24 16:54:12,111 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=100, compaction requested=false 2023-05-24 16:54:12,113 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:12,113 DEBUG [M:0;jenkins-hbase20:42189] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:54:12,114 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/MasterData/WALs/jenkins-hbase20.apache.org,42189,1684947165857 2023-05-24 16:54:12,118 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:54:12,118 INFO [M:0;jenkins-hbase20:42189] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-05-24 16:54:12,118 INFO [M:0;jenkins-hbase20:42189] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:42189 2023-05-24 16:54:12,120 DEBUG [M:0;jenkins-hbase20:42189] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,42189,1684947165857 already deleted, retry=false 2023-05-24 16:54:12,172 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:54:12,172 INFO [RS:0;jenkins-hbase20:38189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38189,1684947166962; zookeeper connection closed. 2023-05-24 16:54:12,172 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): regionserver:38189-0x1017e6377c20001, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:54:12,173 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1b45023f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1b45023f 2023-05-24 16:54:12,174 INFO [Listener at localhost.localdomain/41887] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 16:54:12,272 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:54:12,272 INFO [M:0;jenkins-hbase20:42189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,42189,1684947165857; zookeeper connection closed. 
2023-05-24 16:54:12,272 DEBUG [Listener at localhost.localdomain/41887-EventThread] zookeeper.ZKWatcher(600): master:42189-0x1017e6377c20000, quorum=127.0.0.1:62237, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:54:12,274 WARN [Listener at localhost.localdomain/41887] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:12,278 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:12,390 WARN [BP-1019449406-148.251.75.209-1684947162915 heartbeating to localhost.localdomain/127.0.0.1:42025] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:12,390 WARN [BP-1019449406-148.251.75.209-1684947162915 heartbeating to localhost.localdomain/127.0.0.1:42025] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1019449406-148.251.75.209-1684947162915 (Datanode Uuid ba61a3c9-5156-47e8-b3d8-2b38eb51dcbe) service to localhost.localdomain/127.0.0.1:42025 2023-05-24 16:54:12,392 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/dfs/data/data3/current/BP-1019449406-148.251.75.209-1684947162915] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:12,392 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/dfs/data/data4/current/BP-1019449406-148.251.75.209-1684947162915] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:12,393 WARN [Listener at localhost.localdomain/41887] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:12,395 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:12,499 WARN [BP-1019449406-148.251.75.209-1684947162915 heartbeating to localhost.localdomain/127.0.0.1:42025] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:12,499 WARN [BP-1019449406-148.251.75.209-1684947162915 heartbeating to localhost.localdomain/127.0.0.1:42025] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1019449406-148.251.75.209-1684947162915 (Datanode Uuid 20d9622d-ae3f-46b2-b827-a144967cb573) service to localhost.localdomain/127.0.0.1:42025 2023-05-24 16:54:12,500 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/dfs/data/data1/current/BP-1019449406-148.251.75.209-1684947162915] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:12,501 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/cluster_ce6252e0-a661-73c0-7450-0c31f7667cf2/dfs/data/data2/current/BP-1019449406-148.251.75.209-1684947162915] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-05-24 16:54:12,537 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 16:54:12,656 INFO [Listener at localhost.localdomain/41887] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 16:54:12,691 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 16:54:12,701 INFO [Listener at localhost.localdomain/41887] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:42025 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: Listener at localhost.localdomain/41887 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:42025 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:42025 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging 
thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:42025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:42025 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@e7cf5aa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=442 (was 264) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=125 (was 298), ProcessCount=169 (was 176), AvailableMemoryMB=10530 (was 11720) 2023-05-24 16:54:12,709 INFO [Listener at localhost.localdomain/41887] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=442, MaxFileDescriptor=60000, SystemLoadAverage=125, ProcessCount=169, AvailableMemoryMB=10530 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/hadoop.log.dir so I do NOT create it in target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/3639982c-638f-b328-0ef2-ef6c98be86db/hadoop.tmp.dir so I do NOT create it in target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2, deleteOnExit=true 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/test.cache.data in system properties and HBase conf 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:54:12,710 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/hadoop.log.dir in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:54:12,711 DEBUG [Listener at localhost.localdomain/41887] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:54:12,711 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:54:12,712 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:54:12,713 INFO [Listener at localhost.localdomain/41887] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:54:12,714 WARN [Listener at localhost.localdomain/41887] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
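
Aside (not part of the captured log): the mini-cluster startup recorded above for testLogRollOnDatanodeDeath is normally driven by JUnit setup code along the following lines. This is a minimal sketch assuming the branch-2.x HBaseTestingUtility/StartMiniClusterOption API; the test-body comment is hypothetical.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility testUtil = new HBaseTestingUtility();
        // Mirrors the logged options: 1 master, 1 region server, 2 datanodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        testUtil.startMiniCluster(option);
        try {
          // ... hypothetical test body, e.g. write edits and stop a datanode to force a WAL roll ...
        } finally {
          testUtil.shutdownMiniCluster();
        }
      }
    }
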
2023-05-24 16:54:12,715 WARN [Listener at localhost.localdomain/41887] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:54:12,715 WARN [Listener at localhost.localdomain/41887] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:54:12,744 WARN [Listener at localhost.localdomain/41887] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:12,747 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:12,752 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_localdomain_36787_hdfs____.oasu7n/webapp 2023-05-24 16:54:12,826 INFO [Listener at localhost.localdomain/41887] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36787 2023-05-24 16:54:12,828 WARN [Listener at localhost.localdomain/41887] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:54:12,829 WARN [Listener at localhost.localdomain/41887] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:54:12,829 WARN [Listener at localhost.localdomain/41887] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:54:12,856 WARN [Listener at localhost.localdomain/36125] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:12,868 WARN [Listener at localhost.localdomain/36125] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:12,871 WARN [Listener at localhost.localdomain/36125] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:12,873 INFO [Listener at localhost.localdomain/36125] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:12,879 INFO [Listener at localhost.localdomain/36125] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_34101_datanode____.mxntm0/webapp 2023-05-24 16:54:12,949 INFO [Listener at localhost.localdomain/36125] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34101 2023-05-24 16:54:12,956 WARN [Listener at localhost.localdomain/35051] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:12,970 WARN [Listener at localhost.localdomain/35051] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:12,972 WARN [Listener at localhost.localdomain/35051] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:12,973 INFO [Listener at localhost.localdomain/35051] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:12,977 INFO [Listener at localhost.localdomain/35051] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_41795_datanode____8ogqu5/webapp 2023-05-24 16:54:13,041 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59e91146e2b48785: Processing first storage report for DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8 from datanode 85e95bb2-9617-487b-bf3d-106e918b50f7 2023-05-24 16:54:13,041 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59e91146e2b48785: from storage DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8 node DatanodeRegistration(127.0.0.1:39359, datanodeUuid=85e95bb2-9617-487b-bf3d-106e918b50f7, infoPort=45091, infoSecurePort=0, ipcPort=35051, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 16:54:13,041 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59e91146e2b48785: Processing first storage report for DS-afaceda1-3a35-431e-aeb6-fbbed26f1143 from datanode 85e95bb2-9617-487b-bf3d-106e918b50f7 2023-05-24 16:54:13,041 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59e91146e2b48785: from storage DS-afaceda1-3a35-431e-aeb6-fbbed26f1143 node DatanodeRegistration(127.0.0.1:39359, datanodeUuid=85e95bb2-9617-487b-bf3d-106e918b50f7, infoPort=45091, infoSecurePort=0, ipcPort=35051, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:13,059 INFO [Listener at localhost.localdomain/35051] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41795 2023-05-24 16:54:13,066 WARN [Listener at localhost.localdomain/37029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:13,136 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62d721fc428c6f: Processing first storage report for DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b from datanode f420626f-2eb7-44f6-b6c6-4894c8d4d25e 2023-05-24 16:54:13,136 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62d721fc428c6f: from storage DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b node DatanodeRegistration(127.0.0.1:45819, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=33923, infoSecurePort=0, ipcPort=37029, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:13,136 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x62d721fc428c6f: Processing first storage report for DS-62268f3f-a14a-4687-ab1f-7ac9d41b461f from datanode f420626f-2eb7-44f6-b6c6-4894c8d4d25e 2023-05-24 16:54:13,136 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x62d721fc428c6f: from storage DS-62268f3f-a14a-4687-ab1f-7ac9d41b461f node DatanodeRegistration(127.0.0.1:45819, 
datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=33923, infoSecurePort=0, ipcPort=37029, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:13,181 DEBUG [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b 2023-05-24 16:54:13,184 INFO [Listener at localhost.localdomain/37029] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/zookeeper_0, clientPort=56930, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:54:13,186 INFO [Listener at localhost.localdomain/37029] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56930 2023-05-24 16:54:13,186 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,187 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,206 INFO [Listener at localhost.localdomain/37029] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da with version=8 2023-05-24 16:54:13,206 INFO [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:54:13,208 INFO [Listener at localhost.localdomain/37029] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:54:13,208 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,208 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,209 INFO [Listener at localhost.localdomain/37029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:54:13,209 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,209 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:54:13,209 INFO [Listener at localhost.localdomain/37029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:54:13,211 INFO [Listener at localhost.localdomain/37029] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44841 2023-05-24 16:54:13,211 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,213 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,215 INFO [Listener at localhost.localdomain/37029] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44841 connecting to ZooKeeper ensemble=127.0.0.1:56930 2023-05-24 16:54:13,220 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:448410x0, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:54:13,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44841-0x1017e64cfeb0000 connected 2023-05-24 16:54:13,233 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:54:13,233 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:54:13,234 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:54:13,234 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44841 2023-05-24 16:54:13,234 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44841 2023-05-24 16:54:13,235 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44841 2023-05-24 16:54:13,235 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44841 2023-05-24 16:54:13,235 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44841 2023-05-24 16:54:13,235 INFO [Listener at localhost.localdomain/37029] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da, hbase.cluster.distributed=false 2023-05-24 16:54:13,248 INFO [Listener at localhost.localdomain/37029] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:54:13,249 INFO [Listener at localhost.localdomain/37029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:54:13,250 INFO [Listener at localhost.localdomain/37029] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36833 2023-05-24 16:54:13,250 INFO [Listener at localhost.localdomain/37029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:54:13,251 DEBUG [Listener at localhost.localdomain/37029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:54:13,252 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,253 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,253 INFO [Listener at localhost.localdomain/37029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36833 connecting to ZooKeeper ensemble=127.0.0.1:56930 2023-05-24 16:54:13,260 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:368330x0, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:54:13,261 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): regionserver:368330x0, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:54:13,262 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36833-0x1017e64cfeb0001 connected 2023-05-24 16:54:13,262 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): regionserver:36833-0x1017e64cfeb0001, 
quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:54:13,263 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:54:13,264 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36833 2023-05-24 16:54:13,264 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36833 2023-05-24 16:54:13,265 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36833 2023-05-24 16:54:13,265 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36833 2023-05-24 16:54:13,265 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36833 2023-05-24 16:54:13,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,280 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:54:13,281 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,293 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:54:13,293 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:54:13,294 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,295 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:54:13,296 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:54:13,297 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44841,1684947253207 from backup master directory 2023-05-24 16:54:13,302 DEBUG [Listener at localhost.localdomain/37029-EventThread] 
zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,302 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:54:13,302 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:54:13,302 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/hbase.id with ID: 46cbf2a5-3078-4e58-a1cc-aa1fa70d1a6b 2023-05-24 16:54:13,333 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:13,335 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,344 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x36c1a6f9 to 127.0.0.1:56930 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:54:13,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77b2b0eb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:54:13,353 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:54:13,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:54:13,354 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:54:13,355 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store-tmp 2023-05-24 16:54:13,369 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:13,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:54:13,370 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:13,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:13,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:54:13,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:13,370 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:54:13,370 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:54:13,371 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,374 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44841%2C1684947253207, suffix=, logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207, archiveDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/oldWALs, maxLogs=10 2023-05-24 16:54:13,381 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947253374 2023-05-24 16:54:13,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK], DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] 2023-05-24 16:54:13,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:54:13,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:13,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,384 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,386 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:54:13,387 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:54:13,388 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,389 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,390 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:54:13,396 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:54:13,397 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=846089, jitterRate=0.07585899531841278}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:54:13,397 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:54:13,397 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:54:13,399 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:54:13,399 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:54:13,400 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 16:54:13,400 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 16:54:13,401 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 16:54:13,401 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:54:13,404 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:54:13,405 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:54:13,416 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:54:13,416 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
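
Aside (not part of the captured log): the WAL sizing reported a few entries above ("blocksize=256 MB, rollsize=128 MB") is the block size scaled by the log-roll multiplier. A minimal sketch of that arithmetic follows; the property name is the standard HBase one, but treating 0.5 as its default on this branch is an assumption.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        long blocksize = 256L * 1024 * 1024;   // "blocksize=256 MB" from the WAL configuration entry above
        // Assumed default of 0.5 for the multiplier on this branch.
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        long rollsize = (long) (blocksize * multiplier);
        System.out.println(rollsize);          // 134217728 bytes = 128 MB, matching "rollsize=128 MB"
      }
    }
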
2023-05-24 16:54:13,417 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:54:13,417 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:54:13,417 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:54:13,419 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,419 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:54:13,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:54:13,421 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:54:13,421 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:54:13,421 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:54:13,421 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44841,1684947253207, sessionid=0x1017e64cfeb0000, setting cluster-up flag (Was=false) 2023-05-24 16:54:13,424 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,427 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:54:13,428 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,430 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,433 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:54:13,434 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:13,434 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.hbase-snapshot/.tmp 2023-05-24 16:54:13,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 16:54:13,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:54:13,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:54:13,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:54:13,437 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:54:13,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:54:13,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:54:13,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947283442 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:54:13,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:54:13,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:54:13,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:54:13,444 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:54:13,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:54:13,444 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:54:13,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:54:13,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947253444,5,FailOnTimeoutGroup] 2023-05-24 16:54:13,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947253445,5,FailOnTimeoutGroup] 2023-05-24 16:54:13,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:54:13,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
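
Aside (not part of the captured log): the "ScheduledChore name=LogsCleaner, period=600000" style entries above come from chores registered with the master's ChoreService. A sketch of how such a chore is scheduled is below; the chore body is hypothetical and the exact ScheduledChore constructor signature is an assumption about branch-2.x.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService choreService = new ChoreService("example");
        // Period of 600000 ms mirrors the LogsCleaner entry in the log above.
        ScheduledChore cleaner = new ScheduledChore("LogsCleaner", stopper, 600000) {
          @Override protected void chore() {
            // hypothetical work: scan oldWALs and delete files that every cleaner delegate allows
          }
        };
        choreService.scheduleChore(cleaner);
      }
    }
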
2023-05-24 16:54:13,446 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:54:13,460 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:54:13,460 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:54:13,460 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da 2023-05-24 16:54:13,468 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(951): ClusterId : 46cbf2a5-3078-4e58-a1cc-aa1fa70d1a6b 2023-05-24 16:54:13,468 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:54:13,472 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:54:13,472 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:54:13,473 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:13,474 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:54:13,475 DEBUG [RS:0;jenkins-hbase20:36833] zookeeper.ReadOnlyZKClient(139): Connect 0x6a3d4e28 to 127.0.0.1:56930 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:54:13,475 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:54:13,478 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/info 2023-05-24 16:54:13,479 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:54:13,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,480 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:54:13,481 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:54:13,482 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:54:13,482 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,483 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:54:13,484 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/table 2023-05-24 16:54:13,484 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:54:13,485 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,486 DEBUG [RS:0;jenkins-hbase20:36833] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64fabd1f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:54:13,487 DEBUG [RS:0;jenkins-hbase20:36833] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60c52558, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:54:13,488 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740 2023-05-24 16:54:13,488 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740 2023-05-24 16:54:13,490 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-24 16:54:13,491 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:54:13,493 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:54:13,494 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=752266, jitterRate=-0.04344436526298523}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:54:13,494 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:54:13,494 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:54:13,494 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:54:13,494 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:54:13,494 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:54:13,494 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:54:13,495 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:54:13,495 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:54:13,496 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36833 2023-05-24 16:54:13,496 INFO [RS:0;jenkins-hbase20:36833] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:54:13,496 INFO [RS:0;jenkins-hbase20:36833] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:54:13,496 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 16:54:13,497 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:54:13,497 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:54:13,497 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:54:13,497 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44841,1684947253207 with isa=jenkins-hbase20.apache.org/148.251.75.209:36833, startcode=1684947253248 2023-05-24 16:54:13,497 DEBUG [RS:0;jenkins-hbase20:36833] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:54:13,500 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:54:13,502 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42185, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:54:13,502 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:54:13,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,504 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da 2023-05-24 16:54:13,504 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36125 2023-05-24 16:54:13,504 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:54:13,505 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:54:13,506 DEBUG [RS:0;jenkins-hbase20:36833] zookeeper.ZKUtil(162): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,506 WARN [RS:0;jenkins-hbase20:36833] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 16:54:13,506 INFO [RS:0;jenkins-hbase20:36833] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:54:13,506 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,506 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36833,1684947253248] 2023-05-24 16:54:13,510 DEBUG [RS:0;jenkins-hbase20:36833] zookeeper.ZKUtil(162): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,511 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:54:13,511 INFO [RS:0;jenkins-hbase20:36833] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:54:13,514 INFO [RS:0;jenkins-hbase20:36833] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:54:13,514 INFO [RS:0;jenkins-hbase20:36833] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:54:13,514 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,518 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:54:13,519 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,520 DEBUG [RS:0;jenkins-hbase20:36833] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:13,521 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,521 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,521 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,531 INFO [RS:0;jenkins-hbase20:36833] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:54:13,532 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36833,1684947253248-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:54:13,543 INFO [RS:0;jenkins-hbase20:36833] regionserver.Replication(203): jenkins-hbase20.apache.org,36833,1684947253248 started 2023-05-24 16:54:13,543 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36833,1684947253248, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36833, sessionid=0x1017e64cfeb0001 2023-05-24 16:54:13,543 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:54:13,543 DEBUG [RS:0;jenkins-hbase20:36833] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,543 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36833,1684947253248' 2023-05-24 16:54:13,543 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:54:13,544 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:54:13,544 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:54:13,544 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:54:13,545 DEBUG [RS:0;jenkins-hbase20:36833] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,545 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36833,1684947253248' 2023-05-24 16:54:13,545 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:54:13,545 DEBUG [RS:0;jenkins-hbase20:36833] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:54:13,545 DEBUG [RS:0;jenkins-hbase20:36833] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:54:13,545 INFO [RS:0;jenkins-hbase20:36833] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:54:13,545 INFO [RS:0;jenkins-hbase20:36833] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 16:54:13,648 INFO [RS:0;jenkins-hbase20:36833] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36833%2C1684947253248, suffix=, logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248, archiveDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs, maxLogs=32 2023-05-24 16:54:13,653 DEBUG [jenkins-hbase20:44841] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:54:13,654 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36833,1684947253248, state=OPENING 2023-05-24 16:54:13,655 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:54:13,656 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:13,657 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:54:13,657 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36833,1684947253248}] 2023-05-24 16:54:13,661 INFO [RS:0;jenkins-hbase20:36833] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947253650 2023-05-24 16:54:13,661 DEBUG [RS:0;jenkins-hbase20:36833] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]] 2023-05-24 16:54:13,812 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:13,812 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:54:13,815 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:54:13,820 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:54:13,820 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:54:13,823 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248, archiveDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs, maxLogs=32 2023-05-24 16:54:13,838 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947253825.meta 2023-05-24 16:54:13,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK], DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] 2023-05-24 16:54:13,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:54:13,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:54:13,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:54:13,840 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 16:54:13,840 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:54:13,840 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:13,840 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:54:13,840 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:54:13,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:54:13,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/info 2023-05-24 16:54:13,844 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/info 2023-05-24 16:54:13,845 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:54:13,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:54:13,848 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:54:13,848 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:54:13,848 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:54:13,849 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,849 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:54:13,851 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/table 2023-05-24 16:54:13,851 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740/table 2023-05-24 16:54:13,852 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:54:13,853 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:13,855 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740 2023-05-24 16:54:13,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/meta/1588230740 2023-05-24 16:54:13,861 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:54:13,865 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:54:13,867 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=724599, jitterRate=-0.07862535119056702}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:54:13,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:54:13,871 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947253812 2023-05-24 16:54:13,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:54:13,878 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:54:13,879 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36833,1684947253248, state=OPEN 2023-05-24 16:54:13,881 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:54:13,881 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:54:13,886 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:54:13,886 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36833,1684947253248 in 224 msec 2023-05-24 
16:54:13,889 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:54:13,889 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 389 msec 2023-05-24 16:54:13,891 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 455 msec 2023-05-24 16:54:13,892 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947253892, completionTime=-1 2023-05-24 16:54:13,892 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:54:13,892 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 16:54:13,895 DEBUG [hconnection-0x7a2d9323-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:54:13,897 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54400, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:54:13,898 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:54:13,898 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947313898 2023-05-24 16:54:13,898 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947373898 2023-05-24 16:54:13,898 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44841,1684947253207-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44841,1684947253207-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44841,1684947253207-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44841, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-24 16:54:13,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:54:13,905 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:54:13,906 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:54:13,908 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:54:13,909 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:54:13,911 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:13,912 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d empty. 2023-05-24 16:54:13,912 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:13,912 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:54:13,928 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:54:13,930 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3a26b3da7513119af27b6153a0b44b6d, NAME => 'hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 3a26b3da7513119af27b6153a0b44b6d, disabling compactions & flushes 2023-05-24 16:54:13,942 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. after waiting 0 ms 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:13,942 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:13,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 3a26b3da7513119af27b6153a0b44b6d: 2023-05-24 16:54:13,945 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:54:13,947 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947253947"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947253947"}]},"ts":"1684947253947"} 2023-05-24 16:54:13,950 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:54:13,951 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:54:13,952 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947253951"}]},"ts":"1684947253951"} 2023-05-24 16:54:13,953 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:54:13,957 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3a26b3da7513119af27b6153a0b44b6d, ASSIGN}] 2023-05-24 16:54:13,959 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=3a26b3da7513119af27b6153a0b44b6d, ASSIGN 2023-05-24 16:54:13,960 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=3a26b3da7513119af27b6153a0b44b6d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36833,1684947253248; forceNewPlan=false, retain=false 2023-05-24 16:54:14,112 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3a26b3da7513119af27b6153a0b44b6d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:14,113 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947254112"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947254112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947254112"}]},"ts":"1684947254112"} 2023-05-24 16:54:14,119 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 3a26b3da7513119af27b6153a0b44b6d, server=jenkins-hbase20.apache.org,36833,1684947253248}] 2023-05-24 16:54:14,283 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:14,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3a26b3da7513119af27b6153a0b44b6d, NAME => 'hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:54:14,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:14,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,286 INFO [StoreOpener-3a26b3da7513119af27b6153a0b44b6d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,287 DEBUG [StoreOpener-3a26b3da7513119af27b6153a0b44b6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d/info 2023-05-24 16:54:14,287 DEBUG [StoreOpener-3a26b3da7513119af27b6153a0b44b6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d/info 2023-05-24 16:54:14,288 INFO [StoreOpener-3a26b3da7513119af27b6153a0b44b6d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3a26b3da7513119af27b6153a0b44b6d columnFamilyName info 2023-05-24 16:54:14,288 INFO [StoreOpener-3a26b3da7513119af27b6153a0b44b6d-1] regionserver.HStore(310): Store=3a26b3da7513119af27b6153a0b44b6d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:14,290 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,291 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,294 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:54:14,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/hbase/namespace/3a26b3da7513119af27b6153a0b44b6d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:54:14,296 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 3a26b3da7513119af27b6153a0b44b6d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=768617, jitterRate=-0.022653654217720032}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:54:14,296 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 3a26b3da7513119af27b6153a0b44b6d: 2023-05-24 16:54:14,298 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d., pid=6, masterSystemTime=1684947254275 2023-05-24 16:54:14,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:54:14,300 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 
2023-05-24 16:54:14,301 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=3a26b3da7513119af27b6153a0b44b6d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:14,301 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947254301"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947254301"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947254301"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947254301"}]},"ts":"1684947254301"} 2023-05-24 16:54:14,306 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:54:14,306 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 3a26b3da7513119af27b6153a0b44b6d, server=jenkins-hbase20.apache.org,36833,1684947253248 in 184 msec 2023-05-24 16:54:14,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:54:14,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=3a26b3da7513119af27b6153a0b44b6d, ASSIGN in 349 msec 2023-05-24 16:54:14,310 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:54:14,310 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947254310"}]},"ts":"1684947254310"} 2023-05-24 16:54:14,312 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:54:14,314 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:54:14,316 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 410 msec 2023-05-24 16:54:14,407 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:54:14,408 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:54:14,409 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:14,415 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:54:14,427 DEBUG [Listener at localhost.localdomain/37029-EventThread] 
zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:54:14,433 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 msec 2023-05-24 16:54:14,438 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:54:14,452 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:54:14,456 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-05-24 16:54:14,464 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:54:14,466 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.164sec 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44841,1684947253207-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:54:14,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44841,1684947253207-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 16:54:14,468 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ReadOnlyZKClient(139): Connect 0x5db06799 to 127.0.0.1:56930 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:54:14,470 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:54:14,477 DEBUG [Listener at localhost.localdomain/37029] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c0da3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:54:14,479 DEBUG [hconnection-0x711390a1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:54:14,481 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54404, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:54:14,484 INFO [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:54:14,485 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:14,488 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:54:14,488 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:54:14,489 INFO [Listener at localhost.localdomain/37029] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:54:14,500 INFO [Listener at localhost.localdomain/37029] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:54:14,500 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:14,501 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:14,501 INFO [Listener at localhost.localdomain/37029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:54:14,501 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:54:14,501 INFO [Listener at localhost.localdomain/37029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 
16:54:14,501 INFO [Listener at localhost.localdomain/37029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:54:14,502 INFO [Listener at localhost.localdomain/37029] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46003 2023-05-24 16:54:14,503 INFO [Listener at localhost.localdomain/37029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:54:14,504 DEBUG [Listener at localhost.localdomain/37029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:54:14,504 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:14,505 INFO [Listener at localhost.localdomain/37029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:54:14,506 INFO [Listener at localhost.localdomain/37029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46003 connecting to ZooKeeper ensemble=127.0.0.1:56930 2023-05-24 16:54:14,509 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:460030x0, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:54:14,511 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(162): regionserver:460030x0, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:54:14,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46003-0x1017e64cfeb0005 connected 2023-05-24 16:54:14,512 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(162): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-24 16:54:14,513 DEBUG [Listener at localhost.localdomain/37029] zookeeper.ZKUtil(164): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:54:14,513 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46003 2023-05-24 16:54:14,514 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46003 2023-05-24 16:54:14,516 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46003 2023-05-24 16:54:14,517 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46003 2023-05-24 16:54:14,518 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46003 2023-05-24 16:54:14,522 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(951): ClusterId : 46cbf2a5-3078-4e58-a1cc-aa1fa70d1a6b 2023-05-24 16:54:14,523 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(43): Procedure 
flush-table-proc initializing 2023-05-24 16:54:14,525 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:54:14,525 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:54:14,527 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:54:14,528 DEBUG [RS:1;jenkins-hbase20:46003] zookeeper.ReadOnlyZKClient(139): Connect 0x0cbb56d7 to 127.0.0.1:56930 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:54:14,539 DEBUG [RS:1;jenkins-hbase20:46003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27ca930, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:54:14,539 DEBUG [RS:1;jenkins-hbase20:46003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6433da7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:54:14,547 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:46003 2023-05-24 16:54:14,547 INFO [RS:1;jenkins-hbase20:46003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:54:14,548 INFO [RS:1;jenkins-hbase20:46003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:54:14,548 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1022): About to register with Master. 
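
The RS:1 startup above (Netty RPC server bound on port 46003, block cache and mob file cache sized, ZooKeeper registration against 127.0.0.1:56930, and the flush-table-proc / online-snapshot procedure managers initialized) is what appears when a test adds a second region server to the already-running minicluster. Below is a minimal sketch of how that is usually driven; TEST_UTIL is assumed to be the HBaseTestingUtility that started the cluster, and the helper name is illustrative, not taken from this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

    // Sketch: add one more region server to a running minicluster and wait
    // until it has registered with the master (the reportForDuty logged below).
    static RegionServerThread startSecondRegionServer(HBaseTestingUtility util) throws Exception {
      RegionServerThread rs = util.getMiniHBaseCluster().startRegionServer();
      rs.waitForServerOnline();   // blocks until the new RS is online and serving
      return rs;
    }
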
2023-05-24 16:54:14,548 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44841,1684947253207 with isa=jenkins-hbase20.apache.org/148.251.75.209:46003, startcode=1684947254500 2023-05-24 16:54:14,549 DEBUG [RS:1;jenkins-hbase20:46003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:54:14,553 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49343, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:54:14,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,554 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da 2023-05-24 16:54:14,554 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36125 2023-05-24 16:54:14,554 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:54:14,555 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:54:14,555 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:54:14,556 DEBUG [RS:1;jenkins-hbase20:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,556 WARN [RS:1;jenkins-hbase20:46003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 16:54:14,556 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46003,1684947254500] 2023-05-24 16:54:14,556 INFO [RS:1;jenkins-hbase20:46003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:54:14,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,556 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:14,561 DEBUG [RS:1;jenkins-hbase20:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,561 DEBUG [RS:1;jenkins-hbase20:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:54:14,562 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:54:14,562 INFO [RS:1;jenkins-hbase20:46003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:54:14,564 INFO [RS:1;jenkins-hbase20:46003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:54:14,565 INFO [RS:1;jenkins-hbase20:46003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:54:14,565 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:14,565 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:54:14,567 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
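
The WAL implementation chosen above (FSHLogProvider) and the global memstore limits printed by MemStoreFlusher come straight from configuration. The sketch below only illustrates the relevant keys; the concrete values are assumptions picked to mirror what this run logs (the 743.3 M low-water mark is 95% of the 782.4 M limit), not settings read from the test itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: configuration consistent with the WALFactory and MemStoreFlusher lines above.
    static Configuration walAndMemstoreConf() {
      Configuration conf = HBaseConfiguration.create();
      // "filesystem" selects the FSHLogProvider named in the WALFactory line.
      conf.set("hbase.wal.provider", "filesystem");
      // Fraction of the JVM heap all memstores may use; together with the heap
      // size this yields the 782.4 M globalMemStoreLimit reported above.
      conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
      // Low-water mark as a fraction of the limit (0.95 * 782.4 M is about 743.3 M).
      conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
      return conf;
    }
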
2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,567 DEBUG [RS:1;jenkins-hbase20:46003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:54:14,568 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:14,568 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:14,568 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:54:14,580 INFO [RS:1;jenkins-hbase20:46003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:54:14,580 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46003,1684947254500-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:54:14,589 INFO [RS:1;jenkins-hbase20:46003] regionserver.Replication(203): jenkins-hbase20.apache.org,46003,1684947254500 started 2023-05-24 16:54:14,589 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46003,1684947254500, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46003, sessionid=0x1017e64cfeb0005 2023-05-24 16:54:14,589 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:54:14,589 INFO [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:46003,5,FailOnTimeoutGroup] 2023-05-24 16:54:14,589 DEBUG [RS:1;jenkins-hbase20:46003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,590 INFO [Listener at localhost.localdomain/37029] wal.TestLogRolling(323): Replication=2 2023-05-24 16:54:14,590 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46003,1684947254500' 2023-05-24 16:54:14,590 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:54:14,591 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:54:14,592 DEBUG [Listener at localhost.localdomain/37029] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 16:54:14,592 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:54:14,592 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:54:14,592 DEBUG [RS:1;jenkins-hbase20:46003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,593 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46003,1684947254500' 2023-05-24 16:54:14,593 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:54:14,593 DEBUG [RS:1;jenkins-hbase20:46003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:54:14,594 DEBUG [RS:1;jenkins-hbase20:46003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:54:14,594 INFO [RS:1;jenkins-hbase20:46003] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:54:14,594 INFO [RS:1;jenkins-hbase20:46003] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
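
At this point the harness notes "Started new server=Thread[RS:1;...]" and "Replication=2", which appears to be the DFS replication factor used for the WAL files. A check a test could reasonably make here is that the master now sees both region servers; this is a hedged sketch, with `connection` assumed to be a Connection to the minicluster and the failure handling purely illustrative.

    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Sketch: confirm two live region servers are registered with the master.
    static void assertTwoRegionServers(Connection connection) throws Exception {
      try (Admin admin = connection.getAdmin()) {
        ClusterMetrics metrics = admin.getClusterMetrics();
        if (metrics.getLiveServerMetrics().size() != 2) {
          throw new IllegalStateException("expected 2 live region servers");
        }
      }
    }
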
2023-05-24 16:54:14,595 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59488, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 16:54:14,597 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 16:54:14,597 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-24 16:54:14,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:54:14,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-24 16:54:14,602 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:54:14,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-24 16:54:14,603 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:54:14,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:54:14,605 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:14,606 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd empty. 
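
The create request above spells the table out: a single 'info' family, one version, and deliberately tiny MAX_FILESIZE (786432) and MEMSTORE_FLUSHSIZE (8192), which is exactly what provokes the two TableDescriptorChecker warnings; a log-rolling test wants very frequent flushes and small files. Below is a sketch of an equivalent descriptor built with the 2.x client API; the test's actual code may differ, and `admin` is an assumed Admin handle on the minicluster.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: the same shape of table the master logs as being created above.
    static void createTestTable(Admin admin) throws IOException {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(1)            // VERSIONS => '1'
              .build())
          .setMaxFileSize(786432L)          // triggers the MAX_FILESIZE warning
          .setMemStoreFlushSize(8192L)      // triggers the MEMSTORE_FLUSHSIZE warning
          .build();
      admin.createTable(desc);              // becomes CreateTableProcedure pid=9
    }
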
2023-05-24 16:54:14,607 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:14,607 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-24 16:54:14,627 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-24 16:54:14,628 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => e42755a5d8bc4a869b6e1bc60d5fa9dd, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/.tmp 2023-05-24 16:54:14,643 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:14,644 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing e42755a5d8bc4a869b6e1bc60d5fa9dd, disabling compactions & flushes 2023-05-24 16:54:14,644 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:54:14,644 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:54:14,644 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. after waiting 0 ms 2023-05-24 16:54:14,644 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:54:14,644 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 
2023-05-24 16:54:14,644 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for e42755a5d8bc4a869b6e1bc60d5fa9dd: 2023-05-24 16:54:14,647 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:54:14,649 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684947254649"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947254649"}]},"ts":"1684947254649"} 2023-05-24 16:54:14,651 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:54:14,652 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:54:14,653 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947254653"}]},"ts":"1684947254653"} 2023-05-24 16:54:14,654 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-24 16:54:14,661 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-05-24 16:54:14,663 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-24 16:54:14,663 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-24 16:54:14,663 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-24 16:54:14,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=e42755a5d8bc4a869b6e1bc60d5fa9dd, ASSIGN}] 2023-05-24 16:54:14,666 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=e42755a5d8bc4a869b6e1bc60d5fa9dd, ASSIGN 2023-05-24 16:54:14,667 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=e42755a5d8bc4a869b6e1bc60d5fa9dd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46003,1684947254500; forceNewPlan=false, retain=false 2023-05-24 16:54:14,698 INFO [RS:1;jenkins-hbase20:46003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46003%2C1684947254500, suffix=, logDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500, 
archiveDir=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs, maxLogs=32 2023-05-24 16:54:14,717 INFO [RS:1;jenkins-hbase20:46003] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 2023-05-24 16:54:14,717 DEBUG [RS:1;jenkins-hbase20:46003] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]] 2023-05-24 16:54:14,822 INFO [jenkins-hbase20:44841] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-24 16:54:14,823 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=e42755a5d8bc4a869b6e1bc60d5fa9dd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,824 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684947254823"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947254823"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947254823"}]},"ts":"1684947254823"} 2023-05-24 16:54:14,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure e42755a5d8bc4a869b6e1bc60d5fa9dd, server=jenkins-hbase20.apache.org,46003,1684947254500}] 2023-05-24 16:54:14,984 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:14,984 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:54:14,990 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52084, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:54:14,998 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 
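
The new WAL above is created with blocksize=256 MB, rollsize=128 MB and maxLogs=32, written to a two-datanode pipeline. Those numbers are governed by standard WAL settings; the sketch below shows the keys, with values chosen only to reproduce what is logged (rollsize = blocksize * multiplier), and should be read as an assumption about this run's configuration rather than a fact taken from it.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: WAL sizing consistent with the AbstractFSWAL configuration line above.
    static Configuration walRollConf() {
      Configuration conf = HBaseConfiguration.create();
      // 256 MB WAL block size, rolled at 50% (256 MB * 0.5 = 128 MB), at most 32 WALs kept.
      conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
      conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
      conf.setInt("hbase.regionserver.maxlogs", 32);
      return conf;
    }
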
2023-05-24 16:54:14,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e42755a5d8bc4a869b6e1bc60d5fa9dd, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:54:14,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:54:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,002 INFO [StoreOpener-e42755a5d8bc4a869b6e1bc60d5fa9dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,005 DEBUG [StoreOpener-e42755a5d8bc4a869b6e1bc60d5fa9dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info 2023-05-24 16:54:15,005 DEBUG [StoreOpener-e42755a5d8bc4a869b6e1bc60d5fa9dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info 2023-05-24 16:54:15,006 INFO [StoreOpener-e42755a5d8bc4a869b6e1bc60d5fa9dd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e42755a5d8bc4a869b6e1bc60d5fa9dd columnFamilyName info 2023-05-24 16:54:15,007 INFO [StoreOpener-e42755a5d8bc4a869b6e1bc60d5fa9dd-1] regionserver.HStore(310): Store=e42755a5d8bc4a869b6e1bc60d5fa9dd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:54:15,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:15,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:54:15,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e42755a5d8bc4a869b6e1bc60d5fa9dd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872808, jitterRate=0.10983359813690186}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:54:15,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e42755a5d8bc4a869b6e1bc60d5fa9dd: 2023-05-24 16:54:15,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd., pid=11, masterSystemTime=1684947254984 2023-05-24 16:54:15,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:54:15,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 
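
With the region opened (next sequenceid=2) and deployed on jenkins-hbase20.apache.org,46003, the test can start pushing edits through the new WAL; for testLogRollOnDatanodeDeath it then has to remove a datanode from the write pipeline, which is what produces the DataStreamer "Error Recovery ... datanode ... is bad" messages further down. The sketch below is one plausible shape for that, assuming TEST_UTIL is the HBaseTestingUtility and the MiniDFSCluster helpers commonly seen in HBase tests; the row contents and the choice of stopDataNode(0) are assumptions, not taken from this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: write a handful of rows so edits flow into the WAL, then take one
    // datanode away so the DFS pipeline has to recover and the WAL has reason to roll.
    static void writeRowsAndKillOneDatanode(HBaseTestingUtility util) throws Exception {
      try (Table table = util.getConnection()
          .getTable(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))) {
        for (int i = 0; i < 10; i++) {
          Put put = new Put(Bytes.toBytes("row-" + i));
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
          table.put(put);
        }
      }
      util.getDFSCluster().stopDataNode(0);   // remove one pipeline member
    }
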
2023-05-24 16:54:15,026 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=e42755a5d8bc4a869b6e1bc60d5fa9dd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:54:15,026 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684947255026"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947255026"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947255026"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947255026"}]},"ts":"1684947255026"} 2023-05-24 16:54:15,032 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 16:54:15,032 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure e42755a5d8bc4a869b6e1bc60d5fa9dd, server=jenkins-hbase20.apache.org,46003,1684947254500 in 201 msec 2023-05-24 16:54:15,035 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 16:54:15,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=e42755a5d8bc4a869b6e1bc60d5fa9dd, ASSIGN in 368 msec 2023-05-24 16:54:15,038 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:54:15,038 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947255038"}]},"ts":"1684947255038"} 2023-05-24 16:54:15,040 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-24 16:54:15,042 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:54:15,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 445 msec 2023-05-24 16:54:16,620 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:54:19,512 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 16:54:19,512 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 16:54:20,562 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-24 16:54:24,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:54:24,607 INFO [Listener at localhost.localdomain/37029] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-24 16:54:24,613 DEBUG [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-24 16:54:24,613 DEBUG [Listener at localhost.localdomain/37029] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:54:24,628 WARN [Listener at localhost.localdomain/37029] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:24,630 WARN [Listener at localhost.localdomain/37029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:24,632 INFO [Listener at localhost.localdomain/37029] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:24,638 INFO [Listener at localhost.localdomain/37029] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_32965_datanode____.i886u8/webapp 2023-05-24 16:54:24,721 INFO [Listener at localhost.localdomain/37029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32965 2023-05-24 16:54:24,734 WARN [Listener at localhost.localdomain/44749] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:24,756 WARN [Listener at localhost.localdomain/44749] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:24,760 WARN [Listener at localhost.localdomain/44749] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:24,762 INFO [Listener at localhost.localdomain/44749] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:24,767 INFO [Listener at localhost.localdomain/44749] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_35081_datanode____iv2oz2/webapp 2023-05-24 16:54:24,802 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48877c97a01fe8bf: Processing first storage report for DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e from datanode 2524008e-bc44-4654-9811-9ea694770b02 2023-05-24 16:54:24,802 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48877c97a01fe8bf: from storage DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e node DatanodeRegistration(127.0.0.1:46351, datanodeUuid=2524008e-bc44-4654-9811-9ea694770b02, infoPort=36247, infoSecurePort=0, ipcPort=44749, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:24,802 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48877c97a01fe8bf: Processing first storage report for 
DS-8d36afb0-2f3b-428a-962e-cd7ce45647bd from datanode 2524008e-bc44-4654-9811-9ea694770b02 2023-05-24 16:54:24,802 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48877c97a01fe8bf: from storage DS-8d36afb0-2f3b-428a-962e-cd7ce45647bd node DatanodeRegistration(127.0.0.1:46351, datanodeUuid=2524008e-bc44-4654-9811-9ea694770b02, infoPort=36247, infoSecurePort=0, ipcPort=44749, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 16:54:24,854 INFO [Listener at localhost.localdomain/44749] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35081 2023-05-24 16:54:24,862 WARN [Listener at localhost.localdomain/42633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:24,877 WARN [Listener at localhost.localdomain/42633] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:24,880 WARN [Listener at localhost.localdomain/42633] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:24,882 INFO [Listener at localhost.localdomain/42633] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:24,892 INFO [Listener at localhost.localdomain/42633] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_33917_datanode____5ffdpi/webapp 2023-05-24 16:54:24,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84a8ee17024a4cde: Processing first storage report for DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a from datanode 32faa797-eaa6-456d-a3fc-a707457e92a8 2023-05-24 16:54:24,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84a8ee17024a4cde: from storage DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a node DatanodeRegistration(127.0.0.1:34585, datanodeUuid=32faa797-eaa6-456d-a3fc-a707457e92a8, infoPort=35797, infoSecurePort=0, ipcPort=42633, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:24,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84a8ee17024a4cde: Processing first storage report for DS-ea1ebcc6-9f2f-4ef2-a2ae-feae111eb97b from datanode 32faa797-eaa6-456d-a3fc-a707457e92a8 2023-05-24 16:54:24,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84a8ee17024a4cde: from storage DS-ea1ebcc6-9f2f-4ef2-a2ae-feae111eb97b node DatanodeRegistration(127.0.0.1:34585, datanodeUuid=32faa797-eaa6-456d-a3fc-a707457e92a8, infoPort=35797, infoSecurePort=0, ipcPort=42633, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:24,981 INFO [Listener at localhost.localdomain/42633] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33917 2023-05-24 16:54:24,992 WARN [Listener at localhost.localdomain/36399] common.MetricsLoggerTask(153): Metrics logging 
will not be async since the logger is not log4j 2023-05-24 16:54:25,108 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7e8a4d574b1036f2: Processing first storage report for DS-8e519610-c346-4c2f-ba7d-5c80194cb212 from datanode ce28f0de-27c5-4e44-90cc-e4fb6779eed2 2023-05-24 16:54:25,108 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7e8a4d574b1036f2: from storage DS-8e519610-c346-4c2f-ba7d-5c80194cb212 node DatanodeRegistration(127.0.0.1:35665, datanodeUuid=ce28f0de-27c5-4e44-90cc-e4fb6779eed2, infoPort=36373, infoSecurePort=0, ipcPort=36399, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:25,108 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7e8a4d574b1036f2: Processing first storage report for DS-355d5d10-d219-4d29-bf9b-ff64e418b67c from datanode ce28f0de-27c5-4e44-90cc-e4fb6779eed2 2023-05-24 16:54:25,109 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7e8a4d574b1036f2: from storage DS-355d5d10-d219-4d29-bf9b-ff64e418b67c node DatanodeRegistration(127.0.0.1:35665, datanodeUuid=ce28f0de-27c5-4e44-90cc-e4fb6779eed2, infoPort=36373, infoSecurePort=0, ipcPort=36399, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:25,145 WARN [Listener at localhost.localdomain/36399] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:25,148 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,148 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,163 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 16:54:25,148 WARN [DataStreamer for file 
/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947253825.meta block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK], DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]) is bad. 2023-05-24 16:54:25,168 WARN [DataStreamer for file /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]) is bad. 2023-05-24 16:54:25,168 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45819]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,164 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 16:54:25,167 WARN [DataStreamer for file 
/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947253374 block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK], DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]) is bad. 2023-05-24 16:54:25,169 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:25,169 WARN [DataStreamer for file /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947253650 block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]) is bad. 2023-05-24 16:54:25,175 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:57336 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57336 dst: /127.0.0.1:39359 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,181 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45819]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,185 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:57264 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57264 dst: /127.0.0.1:39359 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,190 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:57242 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57242 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39359 remote=/127.0.0.1:57242]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,191 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39359]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,191 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:57278 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57278 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:39359 remote=/127.0.0.1:57278]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,192 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39359]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,192 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:54530 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45819:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54530 dst: /127.0.0.1:45819 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,195 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:54576 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45819:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54576 dst: /127.0.0.1:45819 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,274 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:54614 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:45819:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54614 dst: /127.0.0.1:45819 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,274 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:25,274 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:54560 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45819:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54560 dst: /127.0.0.1:45819 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,275 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid f420626f-2eb7-44f6-b6c6-4894c8d4d25e) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:54:25,276 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data3/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:25,276 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data4/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:25,278 WARN [Listener at localhost.localdomain/36399] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:25,278 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,279 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1018] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,279 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,279 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:25,290 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:25,393 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:48902 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48902 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,393 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:48900 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48900 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,393 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:48904 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48904 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,393 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:48898 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39359:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48898 dst: /127.0.0.1:39359 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:25,394 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:25,396 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid 85e95bb2-9617-487b-bf3d-106e918b50f7) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:54:25,396 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data1/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:25,397 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data2/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:25,401 DEBUG [Listener at localhost.localdomain/36399] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:54:25,404 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:54:25,406 WARN [RS:1;jenkins-hbase20:46003.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:25,407 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C46003%2C1684947254500:(num 1684947254700) roll requested 2023-05-24 16:54:25,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46003] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:25,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46003] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:54176 deadline: 1684947275404, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-24 16:54:25,414 WARN [Thread-631] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741839_1019 2023-05-24 16:54:25,417 WARN [Thread-631] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] 2023-05-24 16:54:25,422 WARN [Thread-631] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741840_1020 2023-05-24 16:54:25,423 WARN [Thread-631] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK] 2023-05-24 16:54:25,436 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-24 16:54:25,436 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 2023-05-24 16:54:25,438 DEBUG 
[regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK], DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK]] 2023-05-24 16:54:25,438 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:25,438 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 is not closed yet, will try archiving it next time 2023-05-24 16:54:25,438 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:25,439 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 to hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947254700 2023-05-24 16:54:37,517 INFO [Listener at localhost.localdomain/36399] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 2023-05-24 16:54:37,518 WARN [Listener at localhost.localdomain/36399] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:37,519 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:37,519 WARN [DataStreamer for file /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 block BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK], DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK]) is bad. 2023-05-24 16:54:37,523 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:37,525 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:59358 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:46351:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59358 dst: /127.0.0.1:46351 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:46351 remote=/127.0.0.1:59358]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:37,526 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:46351]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at 
java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:37,527 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:58714 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:34585:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58714 dst: /127.0.0.1:34585 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:37,630 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:37,630 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid 32faa797-eaa6-456d-a3fc-a707457e92a8) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:54:37,631 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data7/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:37,632 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data8/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:37,638 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK]] 2023-05-24 16:54:37,638 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK]] 2023-05-24 16:54:37,639 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C46003%2C1684947254500:(num 1684947265407) roll requested 2023-05-24 16:54:37,650 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:42804 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741842_1023]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data6/current]'}, localName='127.0.0.1:46351', datanodeUuid='2524008e-bc44-4654-9811-9ea694770b02', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741842_1023 to mirror 127.0.0.1:34585: java.net.ConnectException: Connection refused 2023-05-24 16:54:37,650 WARN [Thread-641] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741842_1023 2023-05-24 16:54:37,650 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:42804 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:46351:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42804 dst: /127.0.0.1:46351 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:37,659 WARN [Thread-641] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK] 2023-05-24 16:54:37,661 WARN [Thread-641] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741843_1024 2023-05-24 16:54:37,662 
WARN [Thread-641] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK] 2023-05-24 16:54:37,664 WARN [Thread-641] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741844_1025 2023-05-24 16:54:37,665 WARN [Thread-641] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] 2023-05-24 16:54:37,679 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947277639 2023-05-24 16:54:37,679 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK], DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:37,679 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 is not closed yet, will try archiving it next time 2023-05-24 16:54:41,644 WARN [Listener at localhost.localdomain/36399] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:54:41,647 WARN [ResponseProcessor for block BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:54:41,649 WARN [DataStreamer for file /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947277639 block BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026] hdfs.DataStreamer(1548): Error Recovery for BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK], DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK]) is bad. 
2023-05-24 16:54:41,655 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:54:41,655 WARN [PacketResponder: BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35665]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,655 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:56858 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56858 dst: /127.0.0.1:35665 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35665 remote=/127.0.0.1:56858]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,657 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:42818 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741845_1026]] datanode.DataXceiver(323): 127.0.0.1:46351:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42818 dst: /127.0.0.1:46351 java.io.InterruptedIOException: Interrupted while 
waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,766 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:54:41,766 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid 2524008e-bc44-4654-9811-9ea694770b02) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:54:41,767 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data5/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:41,768 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data6/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:54:41,773 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:41,773 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:41,774 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C46003%2C1684947254500:(num 1684947277639) roll requested 2023-05-24 16:54:41,777 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741846_1028 2023-05-24 16:54:41,778 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK] 2023-05-24 16:54:41,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46003] regionserver.HRegion(9158): Flush requested on e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:54:41,780 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e42755a5d8bc4a869b6e1bc60d5fa9dd 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:54:41,781 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741847_1029 2023-05-24 16:54:41,782 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] 2023-05-24 16:54:41,784 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741848_1030 2023-05-24 16:54:41,784 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:54:41,788 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741850_1032 2023-05-24 16:54:41,788 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60372 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741849_1031]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741849_1031 to mirror 127.0.0.1:39359: java.net.ConnectException: Connection refused 2023-05-24 16:54:41,788 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741849_1031 2023-05-24 16:54:41,788 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60372 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741849_1031]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60372 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,788 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK] 2023-05-24 16:54:41,789 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK] 2023-05-24 16:54:41,790 WARN [IPC Server handler 1 on default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 16:54:41,790 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741851_1033 2023-05-24 16:54:41,790 WARN [IPC Server handler 1 on default port 36125] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 16:54:41,791 WARN [IPC Server handler 1 on default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 16:54:41,791 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:54:41,799 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60394 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741853_1035]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741853_1035 to mirror 127.0.0.1:34585: 
java.net.ConnectException: Connection refused 2023-05-24 16:54:41,799 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741853_1035 2023-05-24 16:54:41,800 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60394 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741853_1035]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60394 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,800 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947277639 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281774 2023-05-24 16:54:41,800 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK] 2023-05-24 16:54:41,800 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:41,800 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947277639 is not closed yet, will try archiving it next time 2023-05-24 16:54:41,804 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60402 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741854_1036]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741854_1036 to mirror 127.0.0.1:45819: java.net.ConnectException: Connection refused 2023-05-24 16:54:41,804 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning 
BP-609885684-148.251.75.209-1684947252717:blk_1073741854_1036 2023-05-24 16:54:41,804 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60402 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741854_1036]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60402 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,804 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] 2023-05-24 16:54:41,805 WARN [IPC Server handler 2 on default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 16:54:41,805 WARN [IPC Server handler 2 on default port 36125] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 16:54:41,805 WARN [IPC Server handler 2 on default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 16:54:41,830 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3edd4ea6] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46351, datanodeUuid=2524008e-bc44-4654-9811-9ea694770b02, infoPort=36247, infoSecurePort=0, ipcPort=44749, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741841_1022 to 127.0.0.1:39359 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:41,997 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:41,997 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:41,997 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C46003%2C1684947254500:(num 1684947281774) roll requested 2023-05-24 16:54:42,003 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60424 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741856_1038]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741856_1038 to mirror 127.0.0.1:39359: java.net.ConnectException: Connection refused 2023-05-24 16:54:42,003 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741856_1038 2023-05-24 16:54:42,004 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60424 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741856_1038]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60424 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:42,004 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK] 2023-05-24 16:54:42,006 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741857_1039 2023-05-24 16:54:42,007 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK] 2023-05-24 16:54:42,010 ERROR 
[DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60436 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741858_1040]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741858_1040 to mirror 127.0.0.1:45819: java.net.ConnectException: Connection refused 2023-05-24 16:54:42,010 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741858_1040 2023-05-24 16:54:42,010 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1260547961_17 at /127.0.0.1:60436 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741858_1040]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60436 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:42,011 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45819,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK] 2023-05-24 16:54:42,013 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741859_1041 2023-05-24 16:54:42,014 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:54:42,015 WARN [IPC Server handler 4 on default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 16:54:42,015 WARN [IPC Server handler 4 on default port 36125] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 16:54:42,015 WARN [IPC Server handler 4 on 
default port 36125] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 16:54:42,021 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281774 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281997 2023-05-24 16:54:42,021 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:54:42,021 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947277639 is not closed yet, will try archiving it next time 2023-05-24 16:54:42,021 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281774 is not closed yet, will try archiving it next time 2023-05-24 16:54:42,204 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 
2023-05-24 16:54:42,205 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281774 is not closed yet, will try archiving it next time 2023-05-24 16:54:42,211 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/4fefdd93844545278c16df1774b71e27 2023-05-24 16:54:42,221 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/4fefdd93844545278c16df1774b71e27 as hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/4fefdd93844545278c16df1774b71e27 2023-05-24 16:54:42,228 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/4fefdd93844545278c16df1774b71e27, entries=5, sequenceid=12, filesize=10.0 K 2023-05-24 16:54:42,229 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for e42755a5d8bc4a869b6e1bc60d5fa9dd in 449ms, sequenceid=12, compaction requested=false 2023-05-24 16:54:42,230 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e42755a5d8bc4a869b6e1bc60d5fa9dd: 2023-05-24 16:54:42,409 WARN [Listener at localhost.localdomain/36399] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:54:42,411 WARN [Listener at localhost.localdomain/36399] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:54:42,413 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:54:42,417 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/java.io.tmpdir/Jetty_localhost_42295_datanode____hmxzzr/webapp 2023-05-24 16:54:42,425 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 to hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947265407 2023-05-24 16:54:42,491 INFO [Listener at localhost.localdomain/36399] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42295 2023-05-24 
16:54:42,498 WARN [Listener at localhost.localdomain/42905] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:54:42,579 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe1c4918eff63c98a: Processing first storage report for DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b from datanode f420626f-2eb7-44f6-b6c6-4894c8d4d25e 2023-05-24 16:54:42,580 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe1c4918eff63c98a: from storage DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b node DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 16:54:42,580 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe1c4918eff63c98a: Processing first storage report for DS-62268f3f-a14a-4687-ab1f-7ac9d41b461f from datanode f420626f-2eb7-44f6-b6c6-4894c8d4d25e 2023-05-24 16:54:42,580 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe1c4918eff63c98a: from storage DS-62268f3f-a14a-4687-ab1f-7ac9d41b461f node DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:54:43,112 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7ec4c9ea] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35665, datanodeUuid=ce28f0de-27c5-4e44-90cc-e4fb6779eed2, infoPort=36373, infoSecurePort=0, ipcPort=36399, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741855_1037 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:43,444 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:43,445 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44841%2C1684947253207:(num 1684947253374) roll requested 2023-05-24 16:54:43,455 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:43,455 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:43,456 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:51784 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741861_1043]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data4/current]'}, localName='127.0.0.1:38223', datanodeUuid='f420626f-2eb7-44f6-b6c6-4894c8d4d25e', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741861_1043 to mirror 127.0.0.1:46351: java.net.ConnectException: Connection refused 2023-05-24 16:54:43,456 WARN [Thread-707] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741861_1043 2023-05-24 16:54:43,456 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:51784 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741861_1043]] datanode.DataXceiver(323): 127.0.0.1:38223:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:51784 dst: /127.0.0.1:38223 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:43,457 WARN [Thread-707] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:54:43,458 WARN [Thread-707] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741862_1044 2023-05-24 16:54:43,459 WARN [Thread-707] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34585,DS-59124c41-caa9-4a01-ab0a-ce76dda4f97a,DISK] 2023-05-24 16:54:43,466 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-24 16:54:43,466 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947253374 with entries=88, filesize=43.74 KB; new WAL 
/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947283446 2023-05-24 16:54:43,468 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK], DatanodeInfoWithStorage[127.0.0.1:38223,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]] 2023-05-24 16:54:43,468 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947253374 is not closed yet, will try archiving it next time 2023-05-24 16:54:43,468 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:43,469 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207/jenkins-hbase20.apache.org%2C44841%2C1684947253207.1684947253374; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:54:44,112 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@171d79b7] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:35665, datanodeUuid=ce28f0de-27c5-4e44-90cc-e4fb6779eed2, infoPort=36373, infoSecurePort=0, ipcPort=36399, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741852_1034 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:55,581 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@c389308] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741836_1012 to 127.0.0.1:34585 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:55,581 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2e58e7fb] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741834_1010 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:56,581 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5cd75253] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, 
storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741830_1006 to 127.0.0.1:34585 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:56,581 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@327eee79] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741828_1004 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:58,582 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7a169bed] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741827_1003 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:54:58,582 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1398dc6b] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741825_1001 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at 
java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:00,823 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:60650 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741864_1046 to mirror 127.0.0.1:46351: java.net.ConnectException: Connection refused 2023-05-24 16:55:00,823 WARN [Thread-723] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741864_1046 2023-05-24 16:55:00,823 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-158139837_17 at /127.0.0.1:60650 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60650 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:00,824 WARN [Thread-723] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:55:00,839 INFO [Listener at localhost.localdomain/42905] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281997 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947300816 2023-05-24 16:55:00,839 DEBUG [Listener at localhost.localdomain/42905] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38223,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK], DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK]] 2023-05-24 16:55:00,839 DEBUG [Listener at localhost.localdomain/42905] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500/jenkins-hbase20.apache.org%2C46003%2C1684947254500.1684947281997 is not closed yet, will try archiving it next time 
2023-05-24 16:55:00,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46003] regionserver.HRegion(9158): Flush requested on e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:55:00,846 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e42755a5d8bc4a869b6e1bc60d5fa9dd 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-24 16:55:00,848 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-24 16:55:00,854 WARN [Thread-731] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741866_1048 2023-05-24 16:55:00,855 WARN [Thread-731] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:55:00,870 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/bf08f1b6d8644bcda65e92c10dc3d373 2023-05-24 16:55:00,877 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 16:55:00,877 INFO [Listener at localhost.localdomain/42905] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 16:55:00,877 DEBUG [Listener at localhost.localdomain/42905] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5db06799 to 127.0.0.1:56930 2023-05-24 16:55:00,877 DEBUG [Listener at localhost.localdomain/42905] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:00,877 DEBUG [Listener at localhost.localdomain/42905] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 16:55:00,877 DEBUG [Listener at localhost.localdomain/42905] util.JVMClusterUtil(257): Found active master hash=608221506, stopped=false 2023-05-24 16:55:00,877 INFO [Listener at localhost.localdomain/42905] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:00,879 INFO [Listener at localhost.localdomain/42905] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/42905] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x36c1a6f9 to 127.0.0.1:56930 2023-05-24 16:55:00,879 DEBUG 
[MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/bf08f1b6d8644bcda65e92c10dc3d373 as hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/bf08f1b6d8644bcda65e92c10dc3d373 2023-05-24 16:55:00,879 DEBUG [Listener at localhost.localdomain/42905] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:00,880 INFO [Listener at localhost.localdomain/42905] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36833,1684947253248' ***** 2023-05-24 16:55:00,880 INFO [Listener at localhost.localdomain/42905] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:55:00,880 INFO [Listener at localhost.localdomain/42905] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,46003,1684947254500' ***** 2023-05-24 16:55:00,880 INFO [RS:0;jenkins-hbase20:36833] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:55:00,880 INFO [Listener at localhost.localdomain/42905] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:55:00,880 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:55:00,880 INFO [RS:0;jenkins-hbase20:36833] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:55:00,881 INFO [RS:0;jenkins-hbase20:36833] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 16:55:00,881 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(3303): Received CLOSE for 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:55:00,881 INFO [RS:1;jenkins-hbase20:46003] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:55:00,881 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:00,881 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:55:00,881 DEBUG [RS:0;jenkins-hbase20:36833] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a3d4e28 to 127.0.0.1:56930 2023-05-24 16:55:00,882 DEBUG [RS:0;jenkins-hbase20:36833] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:00,882 INFO [RS:0;jenkins-hbase20:36833] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:55:00,882 INFO [RS:0;jenkins-hbase20:36833] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 16:55:00,882 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:00,881 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:00,882 INFO [RS:0;jenkins-hbase20:36833] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-24 16:55:00,882 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:55:00,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 3a26b3da7513119af27b6153a0b44b6d, disabling compactions & flushes 2023-05-24 16:55:00,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:00,883 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-24 16:55:00,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:00,883 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1478): Online Regions={3a26b3da7513119af27b6153a0b44b6d=hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d., 1588230740=hbase:meta,,1.1588230740} 2023-05-24 16:55:00,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. after waiting 0 ms 2023-05-24 16:55:00,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:00,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 3a26b3da7513119af27b6153a0b44b6d 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:55:00,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:55:00,883 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1504): Waiting on 1588230740, 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:55:00,884 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:55:00,884 WARN [RS:0;jenkins-hbase20:36833.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,884 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:55:00,884 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:55:00,884 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36833%2C1684947253248:(num 1684947253650) roll requested 2023-05-24 16:55:00,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 3a26b3da7513119af27b6153a0b44b6d: 2023-05-24 16:55:00,884 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:55:00,884 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-05-24 16:55:00,885 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,885 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,36833,1684947253248: Unrecoverable exception while closing hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,886 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:00,886 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-24 16:55:00,886 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 16:55:00,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-24 16:55:00,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-24 16:55:00,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-24 16:55:00,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-24 16:55:00,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1054343168, "init": 524288000, "max": 2051014656, "used": 318533584 }, "NonHeapMemoryUsage": { "committed": 133849088, "init": 2555904, "max": -1, "used": 131149016 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-24 16:55:00,897 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44841] master.MasterRpcServices(609): jenkins-hbase20.apache.org,36833,1684947253248 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,36833,1684947253248: Unrecoverable exception while closing hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,897 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/bf08f1b6d8644bcda65e92c10dc3d373, entries=8, sequenceid=25, filesize=13.2 K 2023-05-24 16:55:00,898 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for e42755a5d8bc4a869b6e1bc60d5fa9dd in 52ms, sequenceid=25, compaction requested=false 2023-05-24 16:55:00,898 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e42755a5d8bc4a869b6e1bc60d5fa9dd: 2023-05-24 16:55:00,898 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-24 16:55:00,898 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:55:00,898 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/bf08f1b6d8644bcda65e92c10dc3d373 because midkey is the same as first or last row 2023-05-24 16:55:00,898 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:55:00,898 INFO [RS:1;jenkins-hbase20:46003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:55:00,899 INFO [RS:1;jenkins-hbase20:46003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-24 16:55:00,899 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(3303): Received CLOSE for e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:55:00,899 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:43074 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741868_1050]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741868_1050 to mirror 127.0.0.1:46351: java.net.ConnectException: Connection refused 2023-05-24 16:55:00,899 WARN [Thread-739] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741868_1050 2023-05-24 16:55:00,899 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:43074 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741868_1050]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43074 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:00,900 WARN [Thread-739] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:55:00,902 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:55:00,902 DEBUG [RS:1;jenkins-hbase20:46003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0cbb56d7 to 127.0.0.1:56930 2023-05-24 16:55:00,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e42755a5d8bc4a869b6e1bc60d5fa9dd, disabling compactions & flushes 2023-05-24 16:55:00,902 DEBUG [RS:1;jenkins-hbase20:46003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:00,902 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-24 16:55:00,902 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1478): Online Regions={e42755a5d8bc4a869b6e1bc60d5fa9dd=TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd.} 2023-05-24 16:55:00,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region 
TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:55:00,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:55:00,902 DEBUG [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1504): Waiting on e42755a5d8bc4a869b6e1bc60d5fa9dd 2023-05-24 16:55:00,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. after waiting 0 ms 2023-05-24 16:55:00,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:55:00,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing e42755a5d8bc4a869b6e1bc60d5fa9dd 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-24 16:55:00,915 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-24 16:55:00,915 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947253650 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947300884 2023-05-24 16:55:00,916 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK], DatanodeInfoWithStorage[127.0.0.1:38223,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]] 2023-05-24 16:55:00,916 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947253650 is not closed yet, will try archiving it next time 2023-05-24 16:55:00,916 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,916 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta:.meta(num 1684947253825) roll requested 2023-05-24 16:55:00,917 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.1684947253650; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,931 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:43096 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741871_1053]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current]'}, localName='127.0.0.1:35665', datanodeUuid='ce28f0de-27c5-4e44-90cc-e4fb6779eed2', xmitsInProgress=0}:Exception transfering block BP-609885684-148.251.75.209-1684947252717:blk_1073741871_1053 to mirror 127.0.0.1:46351: java.net.ConnectException: Connection refused 2023-05-24 16:55:00,931 WARN [Thread-751] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741871_1053 2023-05-24 16:55:00,932 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1434065613_17 at /127.0.0.1:43096 [Receiving block BP-609885684-148.251.75.209-1684947252717:blk_1073741871_1053]] datanode.DataXceiver(323): 127.0.0.1:35665:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43096 dst: /127.0.0.1:35665 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 
16:55:00,933 WARN [Thread-751] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:55:00,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/beddf7d1d1be4117ab155181131be3fa 2023-05-24 16:55:00,940 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-24 16:55:00,941 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947253825.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947300917.meta 2023-05-24 16:55:00,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/.tmp/info/beddf7d1d1be4117ab155181131be3fa as hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/beddf7d1d1be4117ab155181131be3fa 2023-05-24 16:55:00,945 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35665,DS-8e519610-c346-4c2f-ba7d-5c80194cb212,DISK], DatanodeInfoWithStorage[127.0.0.1:38223,DS-f9d2a931-e7a3-4449-a8d7-ebb6d84ac42b,DISK]] 2023-05-24 16:55:00,945 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,945 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947253825.meta is not closed yet, will try archiving it next time 2023-05-24 16:55:00,945 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248/jenkins-hbase20.apache.org%2C36833%2C1684947253248.meta.1684947253825.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:39359,DS-7b8008be-2155-4423-82dc-0cbf8e43d0e8,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:00,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/info/beddf7d1d1be4117ab155181131be3fa, entries=9, sequenceid=37, filesize=14.2 K 2023-05-24 16:55:00,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for e42755a5d8bc4a869b6e1bc60d5fa9dd in 49ms, sequenceid=37, compaction requested=true 2023-05-24 16:55:00,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/data/default/TestLogRolling-testLogRollOnDatanodeDeath/e42755a5d8bc4a869b6e1bc60d5fa9dd/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-24 16:55:00,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 2023-05-24 16:55:00,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e42755a5d8bc4a869b6e1bc60d5fa9dd: 2023-05-24 16:55:00,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1684947254597.e42755a5d8bc4a869b6e1bc60d5fa9dd. 
2023-05-24 16:55:01,084 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(3303): Received CLOSE for 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:55:01,084 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:55:01,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 3a26b3da7513119af27b6153a0b44b6d, disabling compactions & flushes 2023-05-24 16:55:01,084 DEBUG [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1504): Waiting on 1588230740, 3a26b3da7513119af27b6153a0b44b6d 2023-05-24 16:55:01,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:01,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:55:01,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:01,084 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. after waiting 0 ms 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 3a26b3da7513119af27b6153a0b44b6d: 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684947253904.3a26b3da7513119af27b6153a0b44b6d. 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:01,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 16:55:01,103 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46003,1684947254500; all regions closed. 
2023-05-24 16:55:01,103 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:55:01,112 DEBUG [RS:1;jenkins-hbase20:46003] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/oldWALs 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C46003%2C1684947254500:(num 1684947300816) 2023-05-24 16:55:01,112 DEBUG [RS:1;jenkins-hbase20:46003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:55:01,112 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 16:55:01,112 INFO [RS:1;jenkins-hbase20:46003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 16:55:01,113 INFO [RS:1;jenkins-hbase20:46003] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46003 2023-05-24 16:55:01,117 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:55:01,117 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:01,117 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46003,1684947254500 2023-05-24 16:55:01,117 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:01,117 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:01,118 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46003,1684947254500] 2023-05-24 16:55:01,118 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46003,1684947254500; numProcessing=1 2023-05-24 16:55:01,119 DEBUG 
[RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46003,1684947254500 already deleted, retry=false 2023-05-24 16:55:01,119 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46003,1684947254500 expired; onlineServers=1 2023-05-24 16:55:01,284 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-24 16:55:01,284 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36833,1684947253248; all regions closed. 2023-05-24 16:55:01,285 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:55:01,289 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/WALs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:55:01,296 DEBUG [RS:0;jenkins-hbase20:36833] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:01,296 INFO [RS:0;jenkins-hbase20:36833] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:01,297 INFO [RS:0;jenkins-hbase20:36833] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 16:55:01,297 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:55:01,297 INFO [RS:0;jenkins-hbase20:36833] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36833 2023-05-24 16:55:01,299 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36833,1684947253248 2023-05-24 16:55:01,299 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:01,300 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36833,1684947253248] 2023-05-24 16:55:01,300 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36833,1684947253248; numProcessing=2 2023-05-24 16:55:01,301 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36833,1684947253248 already deleted, retry=false 2023-05-24 16:55:01,301 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36833,1684947253248 expired; onlineServers=0 2023-05-24 16:55:01,301 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44841,1684947253207' ***** 2023-05-24 16:55:01,301 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 16:55:01,301 DEBUG [M:0;jenkins-hbase20:44841] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@207f2912, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:55:01,302 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:55:01,302 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44841,1684947253207; all regions closed. 2023-05-24 16:55:01,302 DEBUG [M:0;jenkins-hbase20:44841] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:01,302 DEBUG [M:0;jenkins-hbase20:44841] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 16:55:01,302 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 16:55:01,302 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947253445] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947253445,5,FailOnTimeoutGroup] 2023-05-24 16:55:01,302 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947253444] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947253444,5,FailOnTimeoutGroup] 2023-05-24 16:55:01,302 DEBUG [M:0;jenkins-hbase20:44841] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 16:55:01,303 INFO [M:0;jenkins-hbase20:44841] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 16:55:01,303 INFO [M:0;jenkins-hbase20:44841] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 16:55:01,303 INFO [M:0;jenkins-hbase20:44841] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 16:55:01,303 DEBUG [M:0;jenkins-hbase20:44841] master.HMaster(1512): Stopping service threads 2023-05-24 16:55:01,303 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:01,303 INFO [M:0;jenkins-hbase20:44841] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 16:55:01,303 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:01,304 ERROR [M:0;jenkins-hbase20:44841] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 16:55:01,304 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:01,304 INFO [M:0;jenkins-hbase20:44841] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 16:55:01,304 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 16:55:01,304 DEBUG [M:0;jenkins-hbase20:44841] zookeeper.ZKUtil(398): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 16:55:01,305 WARN [M:0;jenkins-hbase20:44841] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 16:55:01,305 INFO [M:0;jenkins-hbase20:44841] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 16:55:01,305 INFO [M:0;jenkins-hbase20:44841] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 16:55:01,305 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:55:01,305 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:01,305 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:01,305 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:55:01,306 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:01,306 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.11 KB heapSize=45.77 KB 2023-05-24 16:55:01,313 WARN [Thread-765] hdfs.DataStreamer(1658): Abandoning BP-609885684-148.251.75.209-1684947252717:blk_1073741873_1055 2023-05-24 16:55:01,314 WARN [Thread-765] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:46351,DS-0e0a15f1-1e7d-4e65-9971-3ca9c8b54e3e,DISK] 2023-05-24 16:55:01,321 INFO [M:0;jenkins-hbase20:44841] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.11 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/75f4e938e289446884fbc09c97d47d28 2023-05-24 16:55:01,326 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/75f4e938e289446884fbc09c97d47d28 as hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/75f4e938e289446884fbc09c97d47d28 2023-05-24 16:55:01,331 INFO [M:0;jenkins-hbase20:44841] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36125/user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/75f4e938e289446884fbc09c97d47d28, entries=11, sequenceid=92, filesize=7.0 K 2023-05-24 16:55:01,332 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegion(2948): Finished flush of dataSize ~38.11 KB/39023, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 
1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=92, compaction requested=false 2023-05-24 16:55:01,333 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:01,333 DEBUG [M:0;jenkins-hbase20:44841] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:01,334 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3d092c7a-a581-4082-42ed-7701128e15da/MasterData/WALs/jenkins-hbase20.apache.org,44841,1684947253207 2023-05-24 16:55:01,337 INFO [M:0;jenkins-hbase20:44841] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 16:55:01,337 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:55:01,338 INFO [M:0;jenkins-hbase20:44841] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44841 2023-05-24 16:55:01,339 DEBUG [M:0;jenkins-hbase20:44841] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44841,1684947253207 already deleted, retry=false 2023-05-24 16:55:01,382 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,382 INFO [RS:1;jenkins-hbase20:46003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46003,1684947254500; zookeeper connection closed. 2023-05-24 16:55:01,382 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1017e64cfeb0005, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,383 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d8073d1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d8073d1 2023-05-24 16:55:01,482 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,482 INFO [M:0;jenkins-hbase20:44841] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44841,1684947253207; zookeeper connection closed. 
2023-05-24 16:55:01,482 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): master:44841-0x1017e64cfeb0000, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,524 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:01,581 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@30f36479] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38223, datanodeUuid=f420626f-2eb7-44f6-b6c6-4894c8d4d25e, infoPort=40325, infoSecurePort=0, ipcPort=42905, storageInfo=lv=-57;cid=testClusterID;nsid=569439593;c=1684947252717):Failed to transfer BP-609885684-148.251.75.209-1684947252717:blk_1073741837_1013 to 127.0.0.1:46351 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:01,582 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,582 INFO [RS:0;jenkins-hbase20:36833] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36833,1684947253248; zookeeper connection closed. 2023-05-24 16:55:01,583 DEBUG [Listener at localhost.localdomain/37029-EventThread] zookeeper.ZKWatcher(600): regionserver:36833-0x1017e64cfeb0001, quorum=127.0.0.1:56930, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:01,583 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2afe25c7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2afe25c7 2023-05-24 16:55:01,584 INFO [Listener at localhost.localdomain/42905] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-24 16:55:01,584 WARN [Listener at localhost.localdomain/42905] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:01,588 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:01,695 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:01,695 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid f420626f-2eb7-44f6-b6c6-4894c8d4d25e) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:55:01,696 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data3/current/BP-609885684-148.251.75.209-1684947252717] 
fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:01,697 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data4/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:01,700 WARN [Listener at localhost.localdomain/42905] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:01,704 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:01,810 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:01,810 WARN [BP-609885684-148.251.75.209-1684947252717 heartbeating to localhost.localdomain/127.0.0.1:36125] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-609885684-148.251.75.209-1684947252717 (Datanode Uuid ce28f0de-27c5-4e44-90cc-e4fb6779eed2) service to localhost.localdomain/127.0.0.1:36125 2023-05-24 16:55:01,812 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data9/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:01,813 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/cluster_91ba38c6-0a32-c9bd-564a-4804b931feb2/dfs/data/data10/current/BP-609885684-148.251.75.209-1684947252717] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:01,825 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 16:55:01,938 INFO [Listener at localhost.localdomain/42905] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 16:55:01,979 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 16:55:01,989 INFO [Listener at localhost.localdomain/42905] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 51) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:36125 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:36125 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/42905 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:36125 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:36125 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: LeaseRenewer:jenkins@localhost.localdomain:36125 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:36125 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:36125 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=467 (was 442) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=134 (was 125) - SystemLoadAverage LEAK? 
-, ProcessCount=169 (was 169), AvailableMemoryMB=9955 (was 10530) 2023-05-24 16:55:01,996 INFO [Listener at localhost.localdomain/42905] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=467, MaxFileDescriptor=60000, SystemLoadAverage=134, ProcessCount=169, AvailableMemoryMB=9954 2023-05-24 16:55:01,997 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:55:01,997 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/hadoop.log.dir so I do NOT create it in target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1 2023-05-24 16:55:01,997 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/0d3f7eb8-aa49-0db5-5711-10fe11a59f1b/hadoop.tmp.dir so I do NOT create it in target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1 2023-05-24 16:55:01,997 INFO [Listener at localhost.localdomain/42905] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea, deleteOnExit=true 2023-05-24 16:55:01,997 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/test.cache.data in system properties and HBase conf 2023-05-24 16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/hadoop.log.dir in system properties and HBase conf 2023-05-24 16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 
16:55:01,998 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:55:01,999 DEBUG [Listener at localhost.localdomain/42905] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 16:55:01,999 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:55:01,999 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:55:01,999 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:55:01,999 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 
16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:55:02,000 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:55:02,001 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:55:02,001 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:55:02,001 INFO [Listener at localhost.localdomain/42905] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:55:02,002 WARN [Listener at localhost.localdomain/42905] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:55:02,003 WARN [Listener at localhost.localdomain/42905] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:55:02,004 WARN [Listener at localhost.localdomain/42905] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:55:02,029 WARN [Listener at localhost.localdomain/42905] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:02,031 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:02,036 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_localdomain_43639_hdfs____lnp6vr/webapp 2023-05-24 16:55:02,105 INFO [Listener at localhost.localdomain/42905] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43639 2023-05-24 16:55:02,106 WARN [Listener at localhost.localdomain/42905] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
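The StartMiniClusterOption fields printed at the start of testLogRollOnPipelineRestart (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) correspond to the HBase test utility API. A minimal sketch, assuming the HBase 2.x HBaseTestingUtility and StartMiniClusterOption classes; the surrounding test structure is illustrative and not taken from this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // numMasters=1 in the logged options
            .numRegionServers(1)  // numRegionServers=1
            .numDataNodes(2)      // numDataNodes=2
            .numZkServers(1)      // numZkServers=1
            .build();
        // Starts DFS, a mini ZooKeeper and HBase, producing startup lines like those above.
        util.startMiniCluster(option);
        try {
          // ... test logic against util.getConnection() would go here ...
        } finally {
          util.shutdownMiniCluster();  // tears the cluster down and removes test dirs
        }
      }
    }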
2023-05-24 16:55:02,107 WARN [Listener at localhost.localdomain/42905] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:55:02,108 WARN [Listener at localhost.localdomain/42905] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:55:02,131 WARN [Listener at localhost.localdomain/45333] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:02,141 WARN [Listener at localhost.localdomain/45333] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:02,143 WARN [Listener at localhost.localdomain/45333] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:02,145 INFO [Listener at localhost.localdomain/45333] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:02,149 INFO [Listener at localhost.localdomain/45333] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_34249_datanode____rjmzh/webapp 2023-05-24 16:55:02,220 INFO [Listener at localhost.localdomain/45333] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34249 2023-05-24 16:55:02,227 WARN [Listener at localhost.localdomain/35313] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:02,244 WARN [Listener at localhost.localdomain/35313] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:02,250 WARN [Listener at localhost.localdomain/35313] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:02,251 INFO [Listener at localhost.localdomain/35313] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:02,267 INFO [Listener at localhost.localdomain/35313] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_38493_datanode____45y6nk/webapp 2023-05-24 16:55:02,319 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbe7e313d4770566d: Processing first storage report for DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:02,319 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbe7e313d4770566d: from storage DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 node DatanodeRegistration(127.0.0.1:33525, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=33037, infoSecurePort=0, ipcPort=35313, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:02,319 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbe7e313d4770566d: Processing first storage report for DS-945f7c29-d3d9-4597-8814-9d7522a7593f 
from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:02,319 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbe7e313d4770566d: from storage DS-945f7c29-d3d9-4597-8814-9d7522a7593f node DatanodeRegistration(127.0.0.1:33525, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=33037, infoSecurePort=0, ipcPort=35313, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:02,348 INFO [Listener at localhost.localdomain/35313] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38493 2023-05-24 16:55:02,356 WARN [Listener at localhost.localdomain/39691] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:02,418 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde72f88eac32b3cd: Processing first storage report for DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 from datanode ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:02,418 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde72f88eac32b3cd: from storage DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 node DatanodeRegistration(127.0.0.1:39119, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=33557, infoSecurePort=0, ipcPort=39691, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:02,418 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xde72f88eac32b3cd: Processing first storage report for DS-58220467-e077-46ad-850a-3a0961881946 from datanode ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:02,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xde72f88eac32b3cd: from storage DS-58220467-e077-46ad-850a-3a0961881946 node DatanodeRegistration(127.0.0.1:39119, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=33557, infoSecurePort=0, ipcPort=39691, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:02,468 DEBUG [Listener at localhost.localdomain/39691] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1 2023-05-24 16:55:02,471 INFO [Listener at localhost.localdomain/39691] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/zookeeper_0, clientPort=63205, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:55:02,473 INFO 
[Listener at localhost.localdomain/39691] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63205 2023-05-24 16:55:02,473 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,474 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,493 INFO [Listener at localhost.localdomain/39691] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed with version=8 2023-05-24 16:55:02,494 INFO [Listener at localhost.localdomain/39691] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:55:02,495 INFO [Listener at localhost.localdomain/39691] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:55:02,495 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,495 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,496 INFO [Listener at localhost.localdomain/39691] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:55:02,496 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,496 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:55:02,496 INFO [Listener at localhost.localdomain/39691] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:55:02,497 INFO [Listener at localhost.localdomain/39691] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36965 2023-05-24 16:55:02,498 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,499 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,499 INFO [Listener at localhost.localdomain/39691] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36965 connecting to ZooKeeper ensemble=127.0.0.1:63205 
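The RecoverableZooKeeper lines above show the master process connecting to the single-node ensemble started on clientPort=63205, and the ZKUtil lines that follow show watches being set on znodes that do not exist yet. A minimal sketch of that watch pattern using the plain Apache ZooKeeper client rather than HBase's RecoverableZooKeeper wrapper; the session timeout and path are illustrative:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
      public static void main(String[] args) throws Exception {
        Watcher watcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // SyncConnected, NodeCreated, NodeDeleted, ... events arrive here,
            // matching the "Received ZooKeeper Event" lines in this log.
            System.out.println("ZK event: " + event);
          }
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63205", 90000, watcher);
        // exists() registers the watch even when the znode is absent, so the
        // client is notified once /hbase/master is eventually created.
        zk.exists("/hbase/master", true);
      }
    }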
2023-05-24 16:55:02,504 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:369650x0, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:55:02,505 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36965-0x1017e6590720000 connected 2023-05-24 16:55:02,515 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:02,515 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:02,516 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:55:02,516 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36965 2023-05-24 16:55:02,516 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36965 2023-05-24 16:55:02,516 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36965 2023-05-24 16:55:02,517 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36965 2023-05-24 16:55:02,517 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36965 2023-05-24 16:55:02,517 INFO [Listener at localhost.localdomain/39691] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed, hbase.cluster.distributed=false 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:02,535 INFO [Listener at localhost.localdomain/39691] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 
16:55:02,536 INFO [Listener at localhost.localdomain/39691] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:55:02,537 INFO [Listener at localhost.localdomain/39691] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36253 2023-05-24 16:55:02,537 INFO [Listener at localhost.localdomain/39691] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:55:02,538 DEBUG [Listener at localhost.localdomain/39691] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:55:02,539 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,540 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,541 INFO [Listener at localhost.localdomain/39691] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36253 connecting to ZooKeeper ensemble=127.0.0.1:63205 2023-05-24 16:55:02,544 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:362530x0, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:55:02,544 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): regionserver:362530x0, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:02,545 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36253-0x1017e6590720001 connected 2023-05-24 16:55:02,545 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:02,546 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ZKUtil(164): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:55:02,546 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36253 2023-05-24 16:55:02,546 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36253 2023-05-24 16:55:02,547 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36253 2023-05-24 16:55:02,547 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36253 2023-05-24 16:55:02,547 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36253 2023-05-24 16:55:02,548 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,549 DEBUG [Listener at 
localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:55:02,549 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,550 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:02,550 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:02,550 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,551 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:55:02,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:55:02,551 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,36965,1684947302495 from backup master directory 2023-05-24 16:55:02,552 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,552 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:55:02,552 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
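The ActiveMasterManager events above (registering under /hbase/backup-masters, then deleting that znode and taking /hbase/master) follow the usual ephemeral-znode ownership pattern. A generic sketch of that primitive, not the HBase implementation; it assumes the same ensemble address, that the /hbase parent znode already exists, and uses plain UTF-8 data where HBase stores a serialized ServerName:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZnodeSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:63205", 90000, event -> { });
        byte[] data = "jenkins-hbase20.apache.org,36965,1684947302495".getBytes("UTF-8");
        try {
          // EPHEMERAL: the znode disappears if this session dies, which is what
          // allows a backup master to take over.
          zk.create("/hbase/master", data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
          System.out.println("became active master");
        } catch (KeeperException.NodeExistsException e) {
          System.out.println("another master is already active");
        }
      }
    }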
2023-05-24 16:55:02,552 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,565 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/hbase.id with ID: 5d027dd6-11f6-4653-832e-0291a541dad4 2023-05-24 16:55:02,572 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:02,574 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:02,576 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,592 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7b154556 to 127.0.0.1:63205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:02,597 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e4840a5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:02,597 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:02,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:55:02,598 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:02,600 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store-tmp 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:55:02,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:02,613 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:02,613 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:02,614 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,617 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36965%2C1684947302495, suffix=, logDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495, archiveDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/oldWALs, maxLogs=10 2023-05-24 16:55:02,647 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947302618 2023-05-24 16:55:02,647 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] 2023-05-24 16:55:02,647 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:02,648 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:02,648 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,648 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,651 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,657 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:55:02,658 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:55:02,659 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:02,664 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,667 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,673 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:02,683 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:02,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=812502, jitterRate=0.033150166273117065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:55:02,684 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:02,686 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:55:02,688 INFO [master/jenkins-hbase20:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:55:02,688 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:55:02,688 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 16:55:02,689 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 16:55:02,689 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 16:55:02,689 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:55:02,691 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:55:02,692 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:55:02,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:55:02,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
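The "Loaded config" line above prints the StochasticLoadBalancer tuning values in effect (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, runMaxSteps=false). A sketch of how such values are normally supplied through the HBase Configuration; treat the key names below as an assumption, since they are not stated anywhere in this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key names for the values reported in the balancer's "Loaded config" line.
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1000000L);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
        System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
      }
    }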
2023-05-24 16:55:02,704 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:55:02,704 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:55:02,705 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:55:02,707 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,708 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:55:02,708 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:55:02,709 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:55:02,712 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:02,712 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,712 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:02,713 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,36965,1684947302495, sessionid=0x1017e6590720000, setting cluster-up flag (Was=false) 2023-05-24 16:55:02,716 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,733 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:55:02,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,737 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:02,741 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:55:02,741 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:02,742 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.hbase-snapshot/.tmp 2023-05-24 16:55:02,750 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(951): ClusterId : 5d027dd6-11f6-4653-832e-0291a541dad4 2023-05-24 16:55:02,752 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:55:02,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:02,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:55:02,754 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,754 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:55:02,754 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:55:02,767 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947332767 2023-05-24 16:55:02,767 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:55:02,768 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:55:02,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:55:02,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:55:02,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:55:02,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:55:02,769 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:55:02,769 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:02,774 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,776 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:55:02,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:55:02,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:55:02,776 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:55:02,776 DEBUG [RS:0;jenkins-hbase20:36253] zookeeper.ReadOnlyZKClient(139): Connect 0x75a3b4b3 to 127.0.0.1:63205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:02,778 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:02,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:55:02,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:55:02,785 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947302785,5,FailOnTimeoutGroup] 2023-05-24 16:55:02,791 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947302785,5,FailOnTimeoutGroup] 2023-05-24 16:55:02,792 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,792 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:55:02,792 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,792 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,795 DEBUG [RS:0;jenkins-hbase20:36253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@719c3edf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:02,796 DEBUG [RS:0;jenkins-hbase20:36253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@349ebe88, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:55:02,807 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36253 2023-05-24 16:55:02,808 INFO [RS:0;jenkins-hbase20:36253] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:55:02,808 INFO [RS:0;jenkins-hbase20:36253] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:55:02,808 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 16:55:02,808 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:02,808 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,36965,1684947302495 with isa=jenkins-hbase20.apache.org/148.251.75.209:36253, startcode=1684947302534 2023-05-24 16:55:02,809 DEBUG [RS:0;jenkins-hbase20:36253] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:55:02,809 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:02,809 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed 2023-05-24 16:55:02,820 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59845, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:55:02,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,822 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed 2023-05-24 16:55:02,822 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:45333 2023-05-24 16:55:02,823 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:55:02,827 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:02,828 DEBUG [RS:0;jenkins-hbase20:36253] zookeeper.ZKUtil(162): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,828 WARN [RS:0;jenkins-hbase20:36253] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:55:02,828 INFO [RS:0;jenkins-hbase20:36253] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:02,828 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,835 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36253,1684947302534] 2023-05-24 16:55:02,838 DEBUG [RS:0;jenkins-hbase20:36253] zookeeper.ZKUtil(162): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,839 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:55:02,839 INFO [RS:0;jenkins-hbase20:36253] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:55:02,846 INFO [RS:0;jenkins-hbase20:36253] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:55:02,847 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:02,854 INFO [RS:0;jenkins-hbase20:36253] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:55:02,854 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:55:02,854 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:55:02,856 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,856 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:55:02,857 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,857 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,857 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,857 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/info 2023-05-24 16:55:02,857 DEBUG [RS:0;jenkins-hbase20:36253] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:02,862 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,862 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:02,863 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-05-24 16:55:02,863 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:55:02,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:02,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:55:02,866 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:02,866 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:55:02,867 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:02,867 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:55:02,869 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/table 2023-05-24 16:55:02,870 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:55:02,871 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:02,872 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740 2023-05-24 16:55:02,872 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740 2023-05-24 16:55:02,874 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:55:02,876 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:55:02,880 INFO [RS:0;jenkins-hbase20:36253] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:55:02,882 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:02,882 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36253,1684947302534-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:55:02,883 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875652, jitterRate=0.11345036327838898}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:55:02,883 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:55:02,883 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:55:02,883 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:55:02,883 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:55:02,883 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:55:02,883 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:55:02,890 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:55:02,890 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:02,892 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:02,892 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:55:02,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:55:02,894 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:55:02,895 INFO [RS:0;jenkins-hbase20:36253] regionserver.Replication(203): jenkins-hbase20.apache.org,36253,1684947302534 started 2023-05-24 16:55:02,895 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36253,1684947302534, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36253, sessionid=0x1017e6590720001 2023-05-24 16:55:02,895 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:55:02,895 DEBUG [RS:0;jenkins-hbase20:36253] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,895 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36253,1684947302534' 2023-05-24 16:55:02,895 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:55:02,895 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(154): 
Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36253,1684947302534' 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:55:02,896 DEBUG [RS:0;jenkins-hbase20:36253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:55:02,897 DEBUG [RS:0;jenkins-hbase20:36253] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:55:02,897 INFO [RS:0;jenkins-hbase20:36253] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:55:02,897 INFO [RS:0;jenkins-hbase20:36253] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-24 16:55:02,999 INFO [RS:0;jenkins-hbase20:36253] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36253%2C1684947302534, suffix=, logDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534, archiveDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/oldWALs, maxLogs=32 2023-05-24 16:55:03,013 INFO [RS:0;jenkins-hbase20:36253] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 2023-05-24 16:55:03,013 DEBUG [RS:0;jenkins-hbase20:36253] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] 2023-05-24 16:55:03,046 DEBUG [jenkins-hbase20:36965] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:55:03,047 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36253,1684947302534, state=OPENING 2023-05-24 16:55:03,048 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:55:03,049 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:03,049 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36253,1684947302534}] 2023-05-24 16:55:03,049 DEBUG [zk-event-processor-pool-0] 
master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:55:03,205 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:03,205 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:55:03,208 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:55:03,212 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:55:03,213 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:03,215 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534, archiveDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/oldWALs, maxLogs=32 2023-05-24 16:55:03,228 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947303216.meta 2023-05-24 16:55:03,228 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] 2023-05-24 16:55:03,228 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:03,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:55:03,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:55:03,229 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-24 16:55:03,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:55:03,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:03,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:55:03,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:55:03,235 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:55:03,236 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/info 2023-05-24 16:55:03,236 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/info 2023-05-24 16:55:03,237 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:55:03,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:03,238 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:55:03,239 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:03,239 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:03,239 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:55:03,240 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:03,240 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:55:03,241 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/table 2023-05-24 16:55:03,241 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/table 2023-05-24 16:55:03,242 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:55:03,245 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:03,246 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740 2023-05-24 16:55:03,248 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740 2023-05-24 16:55:03,250 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-24 16:55:03,251 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:55:03,252 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=880403, jitterRate=0.11949101090431213}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:55:03,252 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:55:03,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947303205 2023-05-24 16:55:03,258 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:55:03,259 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:55:03,260 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,36253,1684947302534, state=OPEN 2023-05-24 16:55:03,261 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:55:03,261 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:55:03,265 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:55:03,265 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,36253,1684947302534 in 212 msec 2023-05-24 16:55:03,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:55:03,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec 2023-05-24 16:55:03,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 519 msec 2023-05-24 16:55:03,270 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947303270, completionTime=-1 2023-05-24 16:55:03,270 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:55:03,270 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-24 16:55:03,273 DEBUG [hconnection-0x6703d8c5-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:55:03,275 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42980, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:55:03,277 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:55:03,277 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947363277 2023-05-24 16:55:03,277 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947423277 2023-05-24 16:55:03,277 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36965,1684947302495-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36965,1684947302495-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36965,1684947302495-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:36965, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:03,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-24 16:55:03,291 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:03,292 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:55:03,293 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:55:03,297 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:55:03,299 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:55:03,302 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,303 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79 empty. 2023-05-24 16:55:03,303 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,303 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:55:03,322 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:03,323 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fc47c6cca35b3f44b522fc433babbb79, NAME => 'hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp 2023-05-24 16:55:03,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:03,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fc47c6cca35b3f44b522fc433babbb79, disabling compactions & flushes 2023-05-24 16:55:03,336 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. after waiting 0 ms 2023-05-24 16:55:03,336 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,336 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,337 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fc47c6cca35b3f44b522fc433babbb79: 2023-05-24 16:55:03,339 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:55:03,340 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947303340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947303340"}]},"ts":"1684947303340"} 2023-05-24 16:55:03,342 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:55:03,343 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:55:03,344 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947303343"}]},"ts":"1684947303343"} 2023-05-24 16:55:03,345 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:55:03,349 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fc47c6cca35b3f44b522fc433babbb79, ASSIGN}] 2023-05-24 16:55:03,352 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fc47c6cca35b3f44b522fc433babbb79, ASSIGN 2023-05-24 16:55:03,353 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fc47c6cca35b3f44b522fc433babbb79, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36253,1684947302534; forceNewPlan=false, retain=false 2023-05-24 16:55:03,504 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fc47c6cca35b3f44b522fc433babbb79, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:03,505 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947303504"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947303504"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947303504"}]},"ts":"1684947303504"} 2023-05-24 16:55:03,507 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fc47c6cca35b3f44b522fc433babbb79, server=jenkins-hbase20.apache.org,36253,1684947302534}] 2023-05-24 16:55:03,664 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,664 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc47c6cca35b3f44b522fc433babbb79, NAME => 'hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:03,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:03,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,665 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,668 INFO [StoreOpener-fc47c6cca35b3f44b522fc433babbb79-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,670 DEBUG [StoreOpener-fc47c6cca35b3f44b522fc433babbb79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79/info 2023-05-24 16:55:03,670 DEBUG [StoreOpener-fc47c6cca35b3f44b522fc433babbb79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79/info 2023-05-24 16:55:03,671 INFO [StoreOpener-fc47c6cca35b3f44b522fc433babbb79-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc47c6cca35b3f44b522fc433babbb79 columnFamilyName info 2023-05-24 16:55:03,672 INFO [StoreOpener-fc47c6cca35b3f44b522fc433babbb79-1] regionserver.HStore(310): Store=fc47c6cca35b3f44b522fc433babbb79/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:03,672 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,673 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:03,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/namespace/fc47c6cca35b3f44b522fc433babbb79/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:03,678 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened fc47c6cca35b3f44b522fc433babbb79; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=711263, jitterRate=-0.095582515001297}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:55:03,678 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for fc47c6cca35b3f44b522fc433babbb79: 2023-05-24 16:55:03,681 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79., pid=6, masterSystemTime=1684947303660 2023-05-24 16:55:03,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:03,685 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 
2023-05-24 16:55:03,686 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fc47c6cca35b3f44b522fc433babbb79, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:03,686 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947303685"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947303685"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947303685"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947303685"}]},"ts":"1684947303685"} 2023-05-24 16:55:03,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:55:03,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fc47c6cca35b3f44b522fc433babbb79, server=jenkins-hbase20.apache.org,36253,1684947302534 in 181 msec 2023-05-24 16:55:03,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:55:03,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fc47c6cca35b3f44b522fc433babbb79, ASSIGN in 342 msec 2023-05-24 16:55:03,695 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:55:03,695 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947303695"}]},"ts":"1684947303695"} 2023-05-24 16:55:03,697 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:55:03,699 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:55:03,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 408 msec 2023-05-24 16:55:03,795 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:55:03,796 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:03,796 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:03,800 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:55:03,811 DEBUG [Listener at localhost.localdomain/39691-EventThread] 
zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:03,817 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-05-24 16:55:03,823 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:55:03,834 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:03,838 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-05-24 16:55:03,848 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:55:03,849 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:55:03,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.297sec 2023-05-24 16:55:03,849 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:55:03,850 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:55:03,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:55:03,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36965,1684947302495-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:55:03,851 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36965,1684947302495-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 16:55:03,853 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:55:03,950 DEBUG [Listener at localhost.localdomain/39691] zookeeper.ReadOnlyZKClient(139): Connect 0x00dc4465 to 127.0.0.1:63205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:03,954 DEBUG [Listener at localhost.localdomain/39691] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29f0c813, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:03,956 DEBUG [hconnection-0x33961d84-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:55:03,958 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42990, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:55:03,960 INFO [Listener at localhost.localdomain/39691] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:03,960 INFO [Listener at localhost.localdomain/39691] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:03,975 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:55:03,975 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:03,975 INFO [Listener at localhost.localdomain/39691] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:55:03,976 INFO [Listener at localhost.localdomain/39691] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-24 16:55:03,976 INFO [Listener at localhost.localdomain/39691] wal.TestLogRolling(432): Replication=2 2023-05-24 16:55:03,978 DEBUG [Listener at localhost.localdomain/39691] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 16:55:03,981 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 16:55:03,982 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 16:55:03,983 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 16:55:03,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:03,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-24 16:55:03,988 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:55:03,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-24 16:55:03,989 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:55:03,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:55:03,991 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:03,992 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8 empty. 
2023-05-24 16:55:03,993 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:03,993 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-24 16:55:04,007 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:04,008 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5b9522e47a6ab29836238806d69cfab8, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/.tmp 2023-05-24 16:55:04,016 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:04,016 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 5b9522e47a6ab29836238806d69cfab8, disabling compactions & flushes 2023-05-24 16:55:04,016 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:04,016 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:04,016 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. after waiting 0 ms 2023-05-24 16:55:04,017 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:04,017 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 
2023-05-24 16:55:04,017 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 5b9522e47a6ab29836238806d69cfab8: 2023-05-24 16:55:04,019 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:55:04,020 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684947304020"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947304020"}]},"ts":"1684947304020"} 2023-05-24 16:55:04,022 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:55:04,023 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:55:04,023 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947304023"}]},"ts":"1684947304023"} 2023-05-24 16:55:04,025 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-24 16:55:04,027 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=5b9522e47a6ab29836238806d69cfab8, ASSIGN}] 2023-05-24 16:55:04,029 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=5b9522e47a6ab29836238806d69cfab8, ASSIGN 2023-05-24 16:55:04,031 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=5b9522e47a6ab29836238806d69cfab8, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36253,1684947302534; forceNewPlan=false, retain=false 2023-05-24 16:55:04,182 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5b9522e47a6ab29836238806d69cfab8, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:04,182 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684947304182"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947304182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947304182"}]},"ts":"1684947304182"} 2023-05-24 16:55:04,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5b9522e47a6ab29836238806d69cfab8, 
server=jenkins-hbase20.apache.org,36253,1684947302534}] 2023-05-24 16:55:04,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:04,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5b9522e47a6ab29836238806d69cfab8, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:04,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:04,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,344 INFO [StoreOpener-5b9522e47a6ab29836238806d69cfab8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,346 DEBUG [StoreOpener-5b9522e47a6ab29836238806d69cfab8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8/info 2023-05-24 16:55:04,346 DEBUG [StoreOpener-5b9522e47a6ab29836238806d69cfab8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8/info 2023-05-24 16:55:04,346 INFO [StoreOpener-5b9522e47a6ab29836238806d69cfab8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5b9522e47a6ab29836238806d69cfab8 columnFamilyName info 2023-05-24 16:55:04,347 INFO [StoreOpener-5b9522e47a6ab29836238806d69cfab8-1] regionserver.HStore(310): Store=5b9522e47a6ab29836238806d69cfab8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-05-24 16:55:04,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:04,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/default/TestLogRolling-testLogRollOnPipelineRestart/5b9522e47a6ab29836238806d69cfab8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:04,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5b9522e47a6ab29836238806d69cfab8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=756251, jitterRate=-0.038378000259399414}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:55:04,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5b9522e47a6ab29836238806d69cfab8: 2023-05-24 16:55:04,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8., pid=11, masterSystemTime=1684947304337 2023-05-24 16:55:04,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:04,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 
2023-05-24 16:55:04,357 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5b9522e47a6ab29836238806d69cfab8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:04,357 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684947304357"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947304357"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947304357"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947304357"}]},"ts":"1684947304357"} 2023-05-24 16:55:04,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 16:55:04,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5b9522e47a6ab29836238806d69cfab8, server=jenkins-hbase20.apache.org,36253,1684947302534 in 175 msec 2023-05-24 16:55:04,367 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 16:55:04,367 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=5b9522e47a6ab29836238806d69cfab8, ASSIGN in 335 msec 2023-05-24 16:55:04,368 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:55:04,368 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947304368"}]},"ts":"1684947304368"} 2023-05-24 16:55:04,370 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-24 16:55:04,373 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:55:04,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 390 msec 2023-05-24 16:55:06,502 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:55:08,840 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-24 16:55:13,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:55:13,991 INFO [Listener at localhost.localdomain/39691] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-24 16:55:13,994 DEBUG [Listener at localhost.localdomain/39691] hbase.HBaseTestingUtility(2627): Found 1 regions for table 
TestLogRolling-testLogRollOnPipelineRestart 2023-05-24 16:55:13,994 DEBUG [Listener at localhost.localdomain/39691] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:16,000 INFO [Listener at localhost.localdomain/39691] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 2023-05-24 16:55:16,000 WARN [Listener at localhost.localdomain/39691] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:16,002 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:16,003 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947303216.meta block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]) is bad. 2023-05-24 16:55:16,003 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 16:55:16,003 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947302618 block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]) is bad. 
2023-05-24 16:55:16,003 WARN [PacketResponder: BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39119]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,011 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 16:55:16,012 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_448077192_17 at /127.0.0.1:46440 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46440 dst: /127.0.0.1:33525 java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:406) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,012 WARN [PacketResponder: BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:39119]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,012 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:39119,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]) is bad. 
2023-05-24 16:55:16,017 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:46468 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46468 dst: /127.0.0.1:33525 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,026 INFO [Listener at localhost.localdomain/39691] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:16,027 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:46480 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46480 dst: /127.0.0.1:33525 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33525 remote=/127.0.0.1:46480]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,027 WARN [PacketResponder: BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33525]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,038 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:46032 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:39119:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46032 dst: /127.0.0.1:39119 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,128 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:46028 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:39119:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46028 dst: /127.0.0.1:39119 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,129 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:16,128 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_448077192_17 at /127.0.0.1:46002 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:39119:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46002 dst: /127.0.0.1:39119 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,130 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid ca21e474-cc65-4919-8e09-03e5a82360b6) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:16,131 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data3/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:16,131 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data4/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:16,137 WARN [Listener at localhost.localdomain/39691] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:16,139 WARN [Listener at localhost.localdomain/39691] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:16,140 INFO [Listener at localhost.localdomain/39691] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:16,146 INFO [Listener at localhost.localdomain/39691] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_35523_datanode____.t3v62t/webapp 2023-05-24 16:55:16,230 INFO [Listener at localhost.localdomain/39691] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35523 2023-05-24 16:55:16,238 WARN [Listener at localhost.localdomain/35755] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:16,243 WARN [Listener at localhost.localdomain/35755] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:16,244 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:16,244 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:16,244 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:16,248 INFO [Listener at localhost.localdomain/35755] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:16,296 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xed8dabb0b6fb17be: Processing first storage report for DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 from datanode ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:16,297 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xed8dabb0b6fb17be: from storage DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 node DatanodeRegistration(127.0.0.1:40089, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=35595, infoSecurePort=0, ipcPort=35755, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:16,297 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xed8dabb0b6fb17be: Processing first storage report for DS-58220467-e077-46ad-850a-3a0961881946 from datanode 
ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:16,297 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xed8dabb0b6fb17be: from storage DS-58220467-e077-46ad-850a-3a0961881946 node DatanodeRegistration(127.0.0.1:40089, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=35595, infoSecurePort=0, ipcPort=35755, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:16,319 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid 78296540-a500-4023-b68e-15ca1144a2eb) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:16,320 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:16,320 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data2/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:16,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_448077192_17 at /127.0.0.1:49432 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49432 dst: /127.0.0.1:33525 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,352 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:49406 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49406 dst: /127.0.0.1:33525 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:49416 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33525:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49416 dst: /127.0.0.1:33525 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:16,360 WARN [Listener at localhost.localdomain/35755] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:16,362 WARN [Listener at localhost.localdomain/35755] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:16,363 INFO [Listener at localhost.localdomain/35755] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:16,369 INFO [Listener at localhost.localdomain/35755] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_46377_datanode____ftagag/webapp 2023-05-24 16:55:16,459 INFO [Listener at localhost.localdomain/35755] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46377 2023-05-24 16:55:16,466 WARN [Listener at localhost.localdomain/46363] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:16,531 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9d59632c9fa6587f: Processing first storage report for DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:16,532 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9d59632c9fa6587f: from storage DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 node DatanodeRegistration(127.0.0.1:38689, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=37797, infoSecurePort=0, ipcPort=46363, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:16,532 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x9d59632c9fa6587f: Processing first storage report for DS-945f7c29-d3d9-4597-8814-9d7522a7593f from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:16,532 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9d59632c9fa6587f: from storage DS-945f7c29-d3d9-4597-8814-9d7522a7593f node DatanodeRegistration(127.0.0.1:38689, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=37797, infoSecurePort=0, ipcPort=46363, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:17,473 INFO [Listener at localhost.localdomain/46363] wal.TestLogRolling(481): Data Nodes restarted 2023-05-24 16:55:17,476 INFO [Listener at localhost.localdomain/46363] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-24 16:55:17,478 WARN [RS:0;jenkins-hbase20:36253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:17,480 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36253%2C1684947302534:(num 1684947303000) roll requested 2023-05-24 16:55:17,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36253] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:17,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36253] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:42990 deadline: 1684947327478, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-24 16:55:17,492 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 newFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 2023-05-24 16:55:17,492 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-24 16:55:17,492 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 2023-05-24 16:55:17,493 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:38689,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] 2023-05-24 16:55:17,493 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:17,493 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 is not closed yet, will try archiving it next time 2023-05-24 16:55:17,493 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:29,544 INFO [Listener at localhost.localdomain/46363] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-24 16:55:31,548 WARN [Listener at localhost.localdomain/46363] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:31,552 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:38689,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 16:55:31,555 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:38689,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38689,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]) is bad. 
2023-05-24 16:55:31,555 WARN [PacketResponder: BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38689]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:31,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:50672 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40089:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50672 dst: /127.0.0.1:40089 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:31,564 INFO [Listener at localhost.localdomain/46363] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:31,671 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:41862 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38689:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41862 dst: /127.0.0.1:38689 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:31,673 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:31,674 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid 78296540-a500-4023-b68e-15ca1144a2eb) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:31,674 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:31,675 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data2/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:31,682 WARN [Listener at localhost.localdomain/46363] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:31,684 WARN [Listener at localhost.localdomain/46363] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:31,685 INFO [Listener at localhost.localdomain/46363] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:31,693 INFO [Listener at localhost.localdomain/46363] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_37025_datanode____wqebxy/webapp 2023-05-24 16:55:31,769 INFO [Listener at localhost.localdomain/46363] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37025 2023-05-24 16:55:31,777 WARN [Listener at localhost.localdomain/33633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:31,781 WARN [Listener at localhost.localdomain/33633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:31,781 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:31,784 INFO [Listener at localhost.localdomain/33633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:31,836 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d64dd3fd7eea817: Processing first storage report for DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:31,837 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d64dd3fd7eea817: from storage DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4 node DatanodeRegistration(127.0.0.1:46247, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=42987, infoSecurePort=0, ipcPort=33633, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:31,837 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d64dd3fd7eea817: Processing first storage report for DS-945f7c29-d3d9-4597-8814-9d7522a7593f from datanode 78296540-a500-4023-b68e-15ca1144a2eb 2023-05-24 16:55:31,837 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d64dd3fd7eea817: from storage DS-945f7c29-d3d9-4597-8814-9d7522a7593f node DatanodeRegistration(127.0.0.1:46247, datanodeUuid=78296540-a500-4023-b68e-15ca1144a2eb, infoPort=42987, infoSecurePort=0, ipcPort=33633, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:31,891 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1159899822_17 at /127.0.0.1:43686 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40089:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43686 dst: /127.0.0.1:40089 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:31,892 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:31,893 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid ca21e474-cc65-4919-8e09-03e5a82360b6) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:31,894 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data3/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:31,894 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data4/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:31,904 WARN [Listener at localhost.localdomain/33633] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:31,907 WARN [Listener at localhost.localdomain/33633] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:31,908 INFO [Listener at localhost.localdomain/33633] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:31,914 INFO [Listener at localhost.localdomain/33633] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/java.io.tmpdir/Jetty_localhost_37711_datanode____lxp3bu/webapp 2023-05-24 16:55:31,991 INFO [Listener at localhost.localdomain/33633] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37711 2023-05-24 16:55:32,000 WARN [Listener at localhost.localdomain/45603] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:32,053 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3da6c2488531407f: Processing first storage report for DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 from datanode ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:32,054 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3da6c2488531407f: from storage DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151 node DatanodeRegistration(127.0.0.1:45025, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=32897, infoSecurePort=0, ipcPort=45603, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:32,054 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3da6c2488531407f: Processing first storage report for DS-58220467-e077-46ad-850a-3a0961881946 from datanode ca21e474-cc65-4919-8e09-03e5a82360b6 2023-05-24 16:55:32,054 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3da6c2488531407f: from storage DS-58220467-e077-46ad-850a-3a0961881946 node DatanodeRegistration(127.0.0.1:45025, datanodeUuid=ca21e474-cc65-4919-8e09-03e5a82360b6, infoPort=32897, infoSecurePort=0, ipcPort=45603, storageInfo=lv=-57;cid=testClusterID;nsid=430356698;c=1684947302005), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 16:55:32,770 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:32,771 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36965%2C1684947302495:(num 1684947302618) roll requested 2023-05-24 16:55:32,771 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:32,773 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:32,787 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-24 16:55:32,787 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947302618 with entries=88, filesize=43.82 KB; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947332771 2023-05-24 16:55:32,787 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45025,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] 2023-05-24 16:55:32,788 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947302618 is not closed yet, will try archiving it next time 2023-05-24 16:55:32,788 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:32,788 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495/jenkins-hbase20.apache.org%2C36965%2C1684947302495.1684947302618; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:33,004 INFO [Listener at localhost.localdomain/45603] wal.TestLogRolling(498): Data Nodes restarted 2023-05-24 16:55:33,006 INFO [Listener at localhost.localdomain/45603] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-24 16:55:33,007 WARN [RS:0;jenkins-hbase20:36253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:33,008 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36253%2C1684947302534:(num 1684947317480) roll requested 2023-05-24 16:55:33,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36253] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:33,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36253] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:42990 deadline: 1684947343007, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-24 16:55:33,021 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 newFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 2023-05-24 16:55:33,021 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-24 16:55:33,022 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 2023-05-24 16:55:33,022 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] 2023-05-24 16:55:33,022 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:33,022 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 is not closed yet, will try archiving it next time 2023-05-24 16:55:33,022 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40089,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:45,104 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 newFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 2023-05-24 16:55:45,105 INFO [Listener at localhost.localdomain/45603] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 2023-05-24 16:55:45,109 DEBUG [Listener at localhost.localdomain/45603] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]] 2023-05-24 16:55:45,109 DEBUG [Listener at localhost.localdomain/45603] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 is not closed yet, will try archiving it next time 2023-05-24 16:55:45,110 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 2023-05-24 16:55:45,111 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 2023-05-24 16:55:45,113 WARN [IPC Server handler 1 on default port 45333] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1015 2023-05-24 16:55:45,116 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 after 5ms 2023-05-24 16:55:46,118 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@3d39f7df] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-57616634-148.251.75.209-1684947302005:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:45025,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data4/current/BP-57616634-148.251.75.209-1684947302005/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:49,117 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 after 4006ms 2023-05-24 16:55:49,117 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947303000 2023-05-24 16:55:49,135 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684947303678/Put/vlen=176/seqid=0] 2023-05-24 16:55:49,135 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #4: [default/info:d/1684947303806/Put/vlen=9/seqid=0] 2023-05-24 16:55:49,135 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #5: [hbase/info:d/1684947303830/Put/vlen=7/seqid=0] 2023-05-24 16:55:49,135 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684947304353/Put/vlen=232/seqid=0] 2023-05-24 16:55:49,136 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #4: [row1002/info:/1684947313998/Put/vlen=1045/seqid=0] 2023-05-24 16:55:49,136 DEBUG [Listener at localhost.localdomain/45603] wal.ProtobufLogReader(420): EOF at position 2162 2023-05-24 16:55:49,136 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 2023-05-24 16:55:49,136 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 2023-05-24 16:55:49,137 WARN [IPC Server handler 0 on default port 45333] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-24 16:55:49,137 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 after 1ms 2023-05-24 16:55:50,095 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@74b16c1c] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-57616634-148.251.75.209-1684947302005:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:46247,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current/BP-57616634-148.251.75.209-1684947302005/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current/BP-57616634-148.251.75.209-1684947302005/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more
2023-05-24 16:55:53,139 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480 after 4003ms
2023-05-24 16:55:53,139 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947317480
2023-05-24 16:55:53,147 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #6: [row1003/info:/1684947327539/Put/vlen=1045/seqid=0]
2023-05-24 16:55:53,148 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #7: [row1004/info:/1684947329545/Put/vlen=1045/seqid=0]
2023-05-24 16:55:53,148 DEBUG [Listener at localhost.localdomain/45603] wal.ProtobufLogReader(420): EOF at position 2425
2023-05-24 16:55:53,148 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008
2023-05-24 16:55:53,148 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008
2023-05-24 16:55:53,149 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008 after 1ms
2023-05-24 16:55:53,149 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947333008
2023-05-24 16:55:53,153 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(522): #9: [row1005/info:/1684947343034/Put/vlen=1045/seqid=0]
2023-05-24 16:55:53,154 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039
2023-05-24 16:55:53,154 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039
2023-05-24 16:55:53,154 WARN [IPC Server handler 3 on default port 45333] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 has not
been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-24 16:55:53,155 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 after 0ms 2023-05-24 16:55:54,093 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_448077192_17 at /127.0.0.1:59398 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:46247:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59398 dst: /127.0.0.1:46247 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:46247 remote=/127.0.0.1:59398]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:54,094 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_448077192_17 at /127.0.0.1:41562 [Receiving block BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:45025:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41562 dst: /127.0.0.1:45025 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:54,094 WARN [ResponseProcessor for block BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 16:55:54,095 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 block BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK], DatanodeInfoWithStorage[127.0.0.1:45025,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]) is bad. 2023-05-24 16:55:54,102 WARN [DataStreamer for file /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 block BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,156 INFO [Listener at localhost.localdomain/45603] util.RecoverLeaseFSUtils(175): Recovered 
lease, attempt=1 on file=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 after 4002ms 2023-05-24 16:55:57,156 DEBUG [Listener at localhost.localdomain/45603] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 2023-05-24 16:55:57,166 DEBUG [Listener at localhost.localdomain/45603] wal.ProtobufLogReader(420): EOF at position 83 2023-05-24 16:55:57,168 INFO [Listener at localhost.localdomain/45603] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-05-24 16:55:57,168 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,168 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta:.meta(num 1684947303216) roll requested 2023-05-24 16:55:57,168 DEBUG [Listener at localhost.localdomain/45603] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 16:55:57,168 INFO [Listener at localhost.localdomain/45603] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,169 INFO [Listener at localhost.localdomain/45603] regionserver.HRegion(2745): Flushing 5b9522e47a6ab29836238806d69cfab8 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-24 16:55:57,170 WARN [RS:0;jenkins-hbase20:36253.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) 
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,171 DEBUG [Listener at localhost.localdomain/45603] regionserver.HRegion(2446): Flush status journal for 5b9522e47a6ab29836238806d69cfab8: 2023-05-24 16:55:57,171 INFO [Listener at localhost.localdomain/45603] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,172 INFO [Listener at localhost.localdomain/45603] regionserver.HRegion(2745): Flushing fc47c6cca35b3f44b522fc433babbb79 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:55:57,172 DEBUG [Listener at localhost.localdomain/45603] regionserver.HRegion(2446): Flush status journal for fc47c6cca35b3f44b522fc433babbb79: 2023-05-24 16:55:57,173 INFO [Listener at localhost.localdomain/45603] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 16:55:57,175 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-05-24 16:55:57,175 INFO [Listener at localhost.localdomain/45603] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-05-24 16:55:57,175 DEBUG [Listener at localhost.localdomain/45603] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00dc4465 to 127.0.0.1:63205
2023-05-24 16:55:57,175 DEBUG [Listener at localhost.localdomain/45603] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-24 16:55:57,175 DEBUG [Listener at localhost.localdomain/45603] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-05-24 16:55:57,181 DEBUG [Listener at localhost.localdomain/45603] util.JVMClusterUtil(257): Found active master hash=1408257684, stopped=false
2023-05-24 16:55:57,181 INFO [Listener at localhost.localdomain/45603] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,36965,1684947302495
2023-05-24 16:55:57,183 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-24 16:55:57,183 INFO [Listener at localhost.localdomain/45603] procedure2.ProcedureExecutor(629): Stopping
2023-05-24 16:55:57,183 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-24 16:55:57,183 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-24 16:55:57,183 DEBUG [Listener at localhost.localdomain/45603] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b154556 to 127.0.0.1:63205
2023-05-24 16:55:57,183 DEBUG [Listener at localhost.localdomain/45603] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-24 16:55:57,183 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-24 16:55:57,184 INFO [Listener at localhost.localdomain/45603] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36253,1684947302534' *****
2023-05-24 16:55:57,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-24 16:55:57,184 INFO [Listener at localhost.localdomain/45603] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-05-24 16:55:57,184 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL
2023-05-24 16:55:57,184 INFO [RS:0;jenkins-hbase20:36253] regionserver.HeapMemoryManager(220): Stopping
2023-05-24 16:55:57,184 INFO [RS:0;jenkins-hbase20:36253] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-05-24 16:55:57,184 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-05-24 16:55:57,184 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947303216.meta with entries=11, filesize=3.72 KB; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947357168.meta
2023-05-24 16:55:57,184 INFO [RS:0;jenkins-hbase20:36253] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-05-24 16:55:57,185 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for 5b9522e47a6ab29836238806d69cfab8
2023-05-24 16:55:57,185 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45025,DS-10d5f1e0-3d3a-48f1-9a3b-f0c65c284151,DISK], DatanodeInfoWithStorage[127.0.0.1:46247,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]]
2023-05-24 16:55:57,185 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947303216.meta is not closed yet, will try archiving it next time
2023-05-24 16:55:57,185 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 16:55:57,185 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C36253%2C1684947302534:(num 1684947345039) roll requested
2023-05-24 16:55:57,186 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for fc47c6cca35b3f44b522fc433babbb79
2023-05-24 16:55:57,186 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.meta.1684947303216.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33525,DS-c04a3c3a-d954-4b98-b3bb-79a7a1fc86e4,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 16:55:57,186 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36253,1684947302534
2023-05-24 16:55:57,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5b9522e47a6ab29836238806d69cfab8, disabling compactions & flushes
2023-05-24 16:55:57,187 DEBUG [RS:0;jenkins-hbase20:36253] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75a3b4b3 to 127.0.0.1:63205
2023-05-24 16:55:57,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.
2023-05-24 16:55:57,187 DEBUG [RS:0;jenkins-hbase20:36253] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-24 16:55:57,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.
2023-05-24 16:55:57,187 INFO [RS:0;jenkins-hbase20:36253] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-24 16:55:57,187 INFO [RS:0;jenkins-hbase20:36253] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-24 16:55:57,187 INFO [RS:0;jenkins-hbase20:36253] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-24 16:55:57,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. after waiting 0 ms
2023-05-24 16:55:57,187 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-24 16:55:57,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.
2023-05-24 16:55:57,187 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-05-24 16:55:57,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 5b9522e47a6ab29836238806d69cfab8 1/1 column families, dataSize=4.20 KB heapSize=4.98 KB
2023-05-24 16:55:57,187 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-24 16:55:57,187 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 5b9522e47a6ab29836238806d69cfab8=TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8., fc47c6cca35b3f44b522fc433babbb79=hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79.}
2023-05-24 16:55:57,187 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-24 16:55:57,187 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-24 16:55:57,187 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1504): Waiting on 1588230740, 5b9522e47a6ab29836238806d69cfab8, fc47c6cca35b3f44b522fc433babbb79
2023-05-24 16:55:57,187 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt?
2023-05-24 16:55:57,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-24 16:55:57,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-24 16:55:57,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5b9522e47a6ab29836238806d69cfab8:
2023-05-24 16:55:57,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.95 KB
2023-05-24 16:55:57,188 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,36253,1684947302534: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8.
***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,188 WARN [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 2023-05-24 16:55:57,189 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-24 16:55:57,189 WARN [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultMemStore(90): Snapshot called again without clearing previous. Doing nothing. Another ongoing flush or did we fail last attempt? 
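The Listener thread's entries earlier in this section (util.RecoverLeaseFSUtils logging Failed/Recovered lease attempt=0 and attempt=1, followed by wal.TestLogRolling reading each rolled WAL back) follow the usual HDFS lease-recovery pattern: poll DistributedFileSystem#recoverLease until the NameNode reports the file closed, then open it for reading. The sketch below is a minimal, self-contained illustration of that retry loop only; it is not the RecoverLeaseFSUtils or TestLogRolling source, and the class/helper names (WalLeaseRecoverySketch, waitForLeaseRecovery) and the 4-second retry interval (taken from the roughly 4000ms gaps logged above) are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WalLeaseRecoverySketch {
  // Illustrative helper (not HBase's RecoverLeaseFSUtils): poll
  // DistributedFileSystem#recoverLease until the NameNode reports the WAL
  // closed, mirroring the attempt=0 / attempt=1 entries in the log above.
  static boolean waitForLeaseRecovery(FileSystem fs, Path wal, long timeoutMs)
      throws Exception {
    if (!(fs instanceof DistributedFileSystem)) {
      return true; // nothing to recover on a local filesystem
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    long deadline = System.currentTimeMillis() + timeoutMs;
    for (int attempt = 0; System.currentTimeMillis() < deadline; attempt++) {
      // recoverLease returns true once the file is closed and safe to read.
      if (dfs.recoverLease(wal)) {
        System.out.println("Recovered lease, attempt=" + attempt);
        return true;
      }
      System.out.println("Failed to recover lease, attempt=" + attempt);
      Thread.sleep(4000L); // the log shows roughly 4s between attempts
    }
    return false;
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path wal = new Path(args[0]); // e.g. a rolled WAL path like the ones above
    try (FileSystem fs = FileSystem.get(wal.toUri(), conf)) {
      waitForLeaseRecovery(fs, wal, 60_000L);
    }
  }
}

Only after the lease is recovered can a reader safely see the final length of the WAL, which is why the test waits for attempt=1 to succeed before it logs "Reading WAL" and replays the row1003-row1005 edits.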
2023-05-24 16:55:57,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-24 16:55:57,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-24 16:55:57,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-24 16:55:57,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-24 16:55:57,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1125646336, "init": 524288000, "max": 2051014656, "used": 321353400 }, "NonHeapMemoryUsage": { "committed": 139419648, "init": 2555904, "max": -1, "used": 136876720 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-24 16:55:57,191 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36965] master.MasterRpcServices(609): jenkins-hbase20.apache.org,36253,1684947302534 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,36253,1684947302534: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fc47c6cca35b3f44b522fc433babbb79, disabling compactions & flushes 2023-05-24 16:55:57,191 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. after waiting 0 ms 2023-05-24 16:55:57,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fc47c6cca35b3f44b522fc433babbb79: 2023-05-24 16:55:57,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,196 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 newFile=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947357185 2023-05-24 16:55:57,196 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-24 16:55:57,196 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947357185 2023-05-24 16:55:57,196 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,196 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039 failed. 
Cause="Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-24 16:55:57,196 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,197 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534/jenkins-hbase20.apache.org%2C36253%2C1684947302534.1684947345039, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-57616634-148.251.75.209-1684947302005:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 16:55:57,198 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:57,198 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:57,199 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/WALs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:57,208 DEBUG [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-05-24 16:55:57,209 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.72 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/.tmp/info/2ca6c6c41f7c4c7788827d4f16be9083 2023-05-24 16:55:57,223 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=244 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/.tmp/table/e710d22745a94113beb12de887ccc846 2023-05-24 16:55:57,229 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/.tmp/info/2ca6c6c41f7c4c7788827d4f16be9083 as hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/info/2ca6c6c41f7c4c7788827d4f16be9083 2023-05-24 16:55:57,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/info/2ca6c6c41f7c4c7788827d4f16be9083, entries=20, sequenceid=16, filesize=7.4 K 2023-05-24 16:55:57,234 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/.tmp/table/e710d22745a94113beb12de887ccc846 as 
hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/table/e710d22745a94113beb12de887ccc846 2023-05-24 16:55:57,240 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/data/hbase/meta/1588230740/table/e710d22745a94113beb12de887ccc846, entries=4, sequenceid=16, filesize=4.8 K 2023-05-24 16:55:57,240 WARN [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2895): 1588230740 : failed writing ABORT_FLUSH marker to WAL java.io.IOException: Cannot append; log is closed, regionName = hbase:meta,,1.1588230740 at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2893) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2580) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-24 16:55:57,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Replay of WAL required. Forcing server shutdown 2023-05-24 16:55:57,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:57,240 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 16:55:57,388 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:55:57,388 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for 5b9522e47a6ab29836238806d69cfab8 2023-05-24 16:55:57,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:55:57,388 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(3303): Received CLOSE for fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:57,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5b9522e47a6ab29836238806d69cfab8, disabling compactions & flushes 2023-05-24 16:55:57,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. after waiting 0 ms 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:57,388 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5b9522e47a6ab29836238806d69cfab8: 2023-05-24 16:55:57,388 DEBUG [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1504): Waiting on 1588230740, 5b9522e47a6ab29836238806d69cfab8, fc47c6cca35b3f44b522fc433babbb79 2023-05-24 16:55:57,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684947303982.5b9522e47a6ab29836238806d69cfab8. 2023-05-24 16:55:57,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fc47c6cca35b3f44b522fc433babbb79, disabling compactions & flushes 2023-05-24 16:55:57,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 
2023-05-24 16:55:57,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. after waiting 0 ms 2023-05-24 16:55:57,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fc47c6cca35b3f44b522fc433babbb79: 2023-05-24 16:55:57,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684947303290.fc47c6cca35b3f44b522fc433babbb79. 2023-05-24 16:55:57,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:57,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 16:55:57,589 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-24 16:55:57,590 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36253,1684947302534; all regions closed. 2023-05-24 16:55:57,590 DEBUG [RS:0;jenkins-hbase20:36253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:57,590 INFO [RS:0;jenkins-hbase20:36253] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:57,590 INFO [RS:0;jenkins-hbase20:36253] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 16:55:57,590 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 16:55:57,591 INFO [RS:0;jenkins-hbase20:36253] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36253 2023-05-24 16:55:57,593 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:57,593 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36253,1684947302534 2023-05-24 16:55:57,593 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:57,593 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36253,1684947302534] 2023-05-24 16:55:57,594 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36253,1684947302534; numProcessing=1 2023-05-24 16:55:57,594 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36253,1684947302534 already deleted, retry=false 2023-05-24 16:55:57,594 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36253,1684947302534 expired; onlineServers=0 2023-05-24 16:55:57,594 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,36965,1684947302495' ***** 2023-05-24 16:55:57,594 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 16:55:57,595 DEBUG [M:0;jenkins-hbase20:36965] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57cfc032, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:55:57,595 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:57,595 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36965,1684947302495; all regions closed. 2023-05-24 16:55:57,595 DEBUG [M:0;jenkins-hbase20:36965] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:55:57,595 DEBUG [M:0;jenkins-hbase20:36965] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 16:55:57,595 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-24 16:55:57,595 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947302785] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947302785,5,FailOnTimeoutGroup] 2023-05-24 16:55:57,595 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947302785] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947302785,5,FailOnTimeoutGroup] 2023-05-24 16:55:57,595 DEBUG [M:0;jenkins-hbase20:36965] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 16:55:57,597 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:57,597 INFO [M:0;jenkins-hbase20:36965] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 16:55:57,597 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:57,597 INFO [M:0;jenkins-hbase20:36965] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 16:55:57,597 INFO [M:0;jenkins-hbase20:36965] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 16:55:57,597 DEBUG [M:0;jenkins-hbase20:36965] master.HMaster(1512): Stopping service threads 2023-05-24 16:55:57,597 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:57,597 INFO [M:0;jenkins-hbase20:36965] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 16:55:57,597 ERROR [M:0;jenkins-hbase20:36965] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 16:55:57,597 INFO [M:0;jenkins-hbase20:36965] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 16:55:57,598 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 16:55:57,598 DEBUG [M:0;jenkins-hbase20:36965] zookeeper.ZKUtil(398): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 16:55:57,598 WARN [M:0;jenkins-hbase20:36965] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 16:55:57,598 INFO [M:0;jenkins-hbase20:36965] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 16:55:57,598 INFO [M:0;jenkins-hbase20:36965] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 16:55:57,599 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:55:57,599 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:57,599 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:57,599 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:55:57,599 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:57,599 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.20 KB heapSize=45.83 KB 2023-05-24 16:55:57,609 INFO [M:0;jenkins-hbase20:36965] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.20 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7c16765819494eaebd3d091619e132e4 2023-05-24 16:55:57,615 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7c16765819494eaebd3d091619e132e4 as hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7c16765819494eaebd3d091619e132e4 2023-05-24 16:55:57,621 INFO [M:0;jenkins-hbase20:36965] regionserver.HStore(1080): Added hdfs://localhost.localdomain:45333/user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7c16765819494eaebd3d091619e132e4, entries=11, sequenceid=92, filesize=7.0 K 2023-05-24 16:55:57,622 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegion(2948): Finished flush of dataSize ~38.20 KB/39113, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=92, compaction requested=false 2023-05-24 16:55:57,623 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 16:55:57,624 DEBUG [M:0;jenkins-hbase20:36965] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:57,624 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1dcfeb7b-16d6-05b9-ab82-d33dacf985ed/MasterData/WALs/jenkins-hbase20.apache.org,36965,1684947302495 2023-05-24 16:55:57,627 INFO [M:0;jenkins-hbase20:36965] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 16:55:57,627 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:55:57,628 INFO [M:0;jenkins-hbase20:36965] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36965 2023-05-24 16:55:57,630 DEBUG [M:0;jenkins-hbase20:36965] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,36965,1684947302495 already deleted, retry=false 2023-05-24 16:55:57,694 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:57,694 INFO [RS:0;jenkins-hbase20:36253] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36253,1684947302534; zookeeper connection closed. 2023-05-24 16:55:57,694 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): regionserver:36253-0x1017e6590720001, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:57,695 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@666a5b06] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@666a5b06 2023-05-24 16:55:57,701 INFO [Listener at localhost.localdomain/45603] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 16:55:57,794 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:57,795 DEBUG [Listener at localhost.localdomain/39691-EventThread] zookeeper.ZKWatcher(600): master:36965-0x1017e6590720000, quorum=127.0.0.1:63205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:55:57,794 INFO [M:0;jenkins-hbase20:36965] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36965,1684947302495; zookeeper connection closed. 
2023-05-24 16:55:57,796 WARN [Listener at localhost.localdomain/45603] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:57,801 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:57,908 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:57,908 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid ca21e474-cc65-4919-8e09-03e5a82360b6) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:57,910 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data3/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:57,910 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data4/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:57,913 WARN [Listener at localhost.localdomain/45603] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:55:57,919 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:55:58,026 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:55:58,026 WARN [BP-57616634-148.251.75.209-1684947302005 heartbeating to localhost.localdomain/127.0.0.1:45333] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-57616634-148.251.75.209-1684947302005 (Datanode Uuid 78296540-a500-4023-b68e-15ca1144a2eb) service to localhost.localdomain/127.0.0.1:45333 2023-05-24 16:55:58,027 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data1/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:58,027 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/cluster_4b50dddf-ec5c-c621-b9a4-3606303ec6ea/dfs/data/data2/current/BP-57616634-148.251.75.209-1684947302005] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:55:58,040 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 16:55:58,158 INFO [Listener at 
localhost.localdomain/45603] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 16:55:58,172 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 16:55:58,181 INFO [Listener at localhost.localdomain/45603] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:45333 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45603 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:45333 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:45333 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1584293186) connection to localhost.localdomain/127.0.0.1:45333 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-28-1 
java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:45333 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=468 (was 467) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=199 (was 134) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=10163 (was 9954) - AvailableMemoryMB LEAK? - 2023-05-24 16:55:58,188 INFO [Listener at localhost.localdomain/45603] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=468, MaxFileDescriptor=60000, SystemLoadAverage=199, ProcessCount=169, AvailableMemoryMB=10163 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/hadoop.log.dir so I do NOT create it in target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/77c0162a-4480-e839-ff99-29b8759e1ca1/hadoop.tmp.dir so I do NOT create it in target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb, deleteOnExit=true 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/test.cache.data in system properties and HBase conf 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/hadoop.log.dir in system properties and HBase conf 2023-05-24 16:55:58,189 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:55:58,190 DEBUG [Listener at localhost.localdomain/45603] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:55:58,190 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:55:58,191 INFO [Listener at localhost.localdomain/45603] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:55:58,193 WARN [Listener at localhost.localdomain/45603] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
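For orientation, the StartMiniClusterOption printed above (numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1) is the topology a test in this suite requests from HBaseTestingUtility before the DFS and ZooKeeper startup that follows. A minimal sketch of that call, assuming the HBase 2.x StartMiniClusterOption builder API; the class name below is illustrative and the values only mirror the logged option string, they are not taken from the test source:

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.StartMiniClusterOption;

  public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
      HBaseTestingUtility testUtil = new HBaseTestingUtility();
      // Mirror the logged option: 1 master, 1 region server, 2 data nodes, 1 ZK server.
      StartMiniClusterOption option = StartMiniClusterOption.builder()
          .numMasters(1)
          .numRegionServers(1)
          .numDataNodes(2)
          .numZkServers(1)
          .build();
      testUtil.startMiniCluster(option);   // brings up mini DFS, ZooKeeper, master and region server
      try {
        // ... test logic against testUtil.getConnection() would go here ...
      } finally {
        testUtil.shutdownMiniCluster();    // tears the cluster down and deletes the test-data directory
      }
    }
  }

Once startMiniCluster is underway, the log continues below with the datanodes registering, the mini ZooKeeper cluster starting on an ephemeral client port, and the master and region server binding their RPC endpoints.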
2023-05-24 16:55:58,194 WARN [Listener at localhost.localdomain/45603] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:55:58,194 WARN [Listener at localhost.localdomain/45603] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:55:58,220 WARN [Listener at localhost.localdomain/45603] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:58,222 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:58,229 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/java.io.tmpdir/Jetty_localhost_localdomain_39763_hdfs____.xw0uu0/webapp 2023-05-24 16:55:58,300 INFO [Listener at localhost.localdomain/45603] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:39763 2023-05-24 16:55:58,301 WARN [Listener at localhost.localdomain/45603] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:55:58,302 WARN [Listener at localhost.localdomain/45603] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:55:58,302 WARN [Listener at localhost.localdomain/45603] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:55:58,325 WARN [Listener at localhost.localdomain/37907] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:58,332 WARN [Listener at localhost.localdomain/37907] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:58,334 WARN [Listener at localhost.localdomain/37907] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:58,335 INFO [Listener at localhost.localdomain/37907] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:58,340 INFO [Listener at localhost.localdomain/37907] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/java.io.tmpdir/Jetty_localhost_35103_datanode____ty4f95/webapp 2023-05-24 16:55:58,411 INFO [Listener at localhost.localdomain/37907] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35103 2023-05-24 16:55:58,417 WARN [Listener at localhost.localdomain/38955] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:58,426 WARN [Listener at localhost.localdomain/38955] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:55:58,428 WARN [Listener at localhost.localdomain/38955] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:55:58,429 INFO [Listener at localhost.localdomain/38955] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:55:58,432 INFO [Listener at localhost.localdomain/38955] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/java.io.tmpdir/Jetty_localhost_36337_datanode____qvjtyz/webapp 2023-05-24 16:55:58,475 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x785d1e4bdbebad91: Processing first storage report for DS-05569dcd-0245-46ad-b554-a69122f39565 from datanode bc60395f-3716-4fd4-bb4a-ec24be23ad7c 2023-05-24 16:55:58,475 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x785d1e4bdbebad91: from storage DS-05569dcd-0245-46ad-b554-a69122f39565 node DatanodeRegistration(127.0.0.1:42165, datanodeUuid=bc60395f-3716-4fd4-bb4a-ec24be23ad7c, infoPort=41705, infoSecurePort=0, ipcPort=38955, storageInfo=lv=-57;cid=testClusterID;nsid=78051256;c=1684947358196), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:58,475 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x785d1e4bdbebad91: Processing first storage report for DS-0d7724e2-0088-491d-932f-c5df52a0e2a4 from datanode bc60395f-3716-4fd4-bb4a-ec24be23ad7c 2023-05-24 16:55:58,475 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x785d1e4bdbebad91: from storage DS-0d7724e2-0088-491d-932f-c5df52a0e2a4 node DatanodeRegistration(127.0.0.1:42165, datanodeUuid=bc60395f-3716-4fd4-bb4a-ec24be23ad7c, infoPort=41705, infoSecurePort=0, ipcPort=38955, storageInfo=lv=-57;cid=testClusterID;nsid=78051256;c=1684947358196), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:58,510 INFO [Listener at localhost.localdomain/38955] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36337 2023-05-24 16:55:58,516 WARN [Listener at localhost.localdomain/37233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:55:58,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5bfbd84fa26346da: Processing first storage report for DS-d45b1c83-17cf-4463-8a36-b63389354241 from datanode b9810d3b-3d7f-4e99-9d6b-49cdef791cc1 2023-05-24 16:55:58,586 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5bfbd84fa26346da: from storage DS-d45b1c83-17cf-4463-8a36-b63389354241 node DatanodeRegistration(127.0.0.1:37957, datanodeUuid=b9810d3b-3d7f-4e99-9d6b-49cdef791cc1, infoPort=34955, infoSecurePort=0, ipcPort=37233, storageInfo=lv=-57;cid=testClusterID;nsid=78051256;c=1684947358196), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:58,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5bfbd84fa26346da: Processing first storage report for DS-cb35e4d7-35d5-4bda-badf-b205ca631c1c from datanode b9810d3b-3d7f-4e99-9d6b-49cdef791cc1 2023-05-24 16:55:58,586 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5bfbd84fa26346da: from storage DS-cb35e4d7-35d5-4bda-badf-b205ca631c1c node DatanodeRegistration(127.0.0.1:37957, 
datanodeUuid=b9810d3b-3d7f-4e99-9d6b-49cdef791cc1, infoPort=34955, infoSecurePort=0, ipcPort=37233, storageInfo=lv=-57;cid=testClusterID;nsid=78051256;c=1684947358196), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:55:58,626 DEBUG [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226 2023-05-24 16:55:58,631 INFO [Listener at localhost.localdomain/37233] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/zookeeper_0, clientPort=50259, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:55:58,632 INFO [Listener at localhost.localdomain/37233] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50259 2023-05-24 16:55:58,633 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,634 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,649 INFO [Listener at localhost.localdomain/37233] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe with version=8 2023-05-24 16:55:58,649 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:55:58,650 INFO [Listener at localhost.localdomain/37233] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:55:58,651 INFO [Listener at localhost.localdomain/37233] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:55:58,652 INFO [Listener at localhost.localdomain/37233] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40141 2023-05-24 16:55:58,652 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,653 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,654 INFO [Listener at localhost.localdomain/37233] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40141 connecting to ZooKeeper ensemble=127.0.0.1:50259 2023-05-24 16:55:58,659 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:401410x0, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:55:58,660 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40141-0x1017e666bd10000 connected 2023-05-24 16:55:58,672 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:58,673 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:58,673 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:55:58,674 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40141 2023-05-24 16:55:58,674 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40141 2023-05-24 16:55:58,674 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40141 2023-05-24 16:55:58,674 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40141 2023-05-24 16:55:58,674 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40141 2023-05-24 16:55:58,675 INFO [Listener at localhost.localdomain/37233] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe, hbase.cluster.distributed=false 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:55:58,686 INFO [Listener at localhost.localdomain/37233] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:55:58,687 INFO [Listener at localhost.localdomain/37233] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:55:58,688 INFO [Listener at localhost.localdomain/37233] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37009 2023-05-24 16:55:58,688 INFO [Listener at localhost.localdomain/37233] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:55:58,689 DEBUG [Listener at localhost.localdomain/37233] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:55:58,689 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,690 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,691 INFO [Listener at localhost.localdomain/37233] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37009 connecting to ZooKeeper ensemble=127.0.0.1:50259 2023-05-24 16:55:58,693 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:370090x0, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:55:58,694 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): regionserver:370090x0, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:55:58,694 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37009-0x1017e666bd10001 connected 2023-05-24 16:55:58,695 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, 
quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:55:58,695 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:55:58,696 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37009 2023-05-24 16:55:58,696 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37009 2023-05-24 16:55:58,697 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37009 2023-05-24 16:55:58,697 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37009 2023-05-24 16:55:58,697 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37009 2023-05-24 16:55:58,698 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,699 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:55:58,700 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,700 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:58,700 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:55:58,700 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,701 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:55:58,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:55:58,701 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,40141,1684947358650 from backup master directory 2023-05-24 16:55:58,702 DEBUG [Listener at localhost.localdomain/37233-EventThread] 
zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,702 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:55:58,702 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:55:58,702 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,715 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/hbase.id with ID: 3cfdcb92-0615-4f8d-8764-41f8c3793b17 2023-05-24 16:55:58,726 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:58,728 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,734 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6b5a0193 to 127.0.0.1:50259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:58,743 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ec2bd38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:58,743 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:58,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:55:58,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:58,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store-tmp 2023-05-24 16:55:58,752 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:55:58,753 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:58,753 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:58,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/WALs/jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,756 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40141%2C1684947358650, suffix=, logDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/WALs/jenkins-hbase20.apache.org,40141,1684947358650, archiveDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/oldWALs, maxLogs=10 2023-05-24 16:55:58,765 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/WALs/jenkins-hbase20.apache.org,40141,1684947358650/jenkins-hbase20.apache.org%2C40141%2C1684947358650.1684947358756 2023-05-24 16:55:58,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK], DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK]] 2023-05-24 16:55:58,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:58,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:58,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,767 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,768 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:55:58,769 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:55:58,769 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:58,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,770 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,773 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:55:58,774 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:58,775 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=724642, jitterRate=-0.07857127487659454}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:55:58,775 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:55:58,775 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:55:58,777 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:55:58,777 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:55:58,777 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 16:55:58,777 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 16:55:58,778 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 16:55:58,778 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:55:58,778 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:55:58,780 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:55:58,789 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:55:58,789 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
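The WAL configuration recorded above for the master's local store (blocksize=256 MB, rollsize=128 MB, maxLogs=10) is the same kind of setting a log-rolling test tunes on the region server side before starting the cluster. A minimal sketch of the standard WAL-rolling knobs, assuming the usual HBase property names; the values are illustrative and the roll size = block size x multiplier relationship is the conventional derivation, not something read back from this log:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class WalRollConfigSketch {
    static Configuration walRollingConf() {
      Configuration conf = HBaseConfiguration.create();
      // WAL block size; the roll size is derived from it via the multiplier
      // (256 MB * 0.5 = 128 MB, matching the blocksize/rollsize pair logged above).
      conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
      conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
      // Upper bound on un-archived WAL files before flushes are forced so old logs can be archived.
      conf.setInt("hbase.regionserver.maxlogs", 10);
      return conf;
    }
  }

A test would typically apply such a configuration to the HBaseTestingUtility before startMiniCluster so that small writes trigger WAL rolls quickly; the remainder of the log below shows the master finishing its bootstrap (balancer, normalizer, cleaner chores, hbase:meta creation) under whatever configuration the suite actually used.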
2023-05-24 16:55:58,789 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:55:58,790 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:55:58,790 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:55:58,791 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:55:58,792 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:55:58,793 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:55:58,794 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:58,794 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:55:58,794 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,794 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,40141,1684947358650, sessionid=0x1017e666bd10000, setting cluster-up flag (Was=false) 2023-05-24 16:55:58,797 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,799 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:55:58,800 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,802 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:58,805 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:55:58,806 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:58,807 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.hbase-snapshot/.tmp 2023-05-24 16:55:58,809 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:55:58,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,814 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947388814 2023-05-24 16:55:58,814 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:55:58,814 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:55:58,814 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:55:58,815 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:58,815 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:55:58,815 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:55:58,816 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947358816,5,FailOnTimeoutGroup] 2023-05-24 16:55:58,816 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947358816,5,FailOnTimeoutGroup] 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,816 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
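The LogCleaner and HFileCleaner chores initialized above pick up their delegate cleaner classes from two plugin lists in the configuration. A minimal sketch, assuming the standard hbase.master.logcleaner.plugins / hbase.master.hfilecleaner.plugins keys and naming only two of the cleaners that appear in this log, not the full default set:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerPluginsSketch {
      public static void main(String[] args) {
        // Illustrative only: the chores logged above load whatever classes these
        // plugin lists name; the values below are a subset for demonstration.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
        System.out.println(conf.get("hbase.master.logcleaner.plugins"));
      }
    }
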
2023-05-24 16:55:58,817 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:58,827 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:58,827 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:58,827 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe 2023-05-24 16:55:58,836 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:58,837 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:55:58,838 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/info 2023-05-24 16:55:58,839 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:55:58,839 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:58,840 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:55:58,841 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:58,841 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:55:58,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:58,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:55:58,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/table 2023-05-24 16:55:58,843 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:55:58,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:58,845 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740 2023-05-24 16:55:58,845 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740 2023-05-24 16:55:58,847 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:55:58,849 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:55:58,851 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:58,851 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=720013, jitterRate=-0.08445696532726288}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:55:58,851 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:55:58,852 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:55:58,852 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:55:58,852 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:55:58,852 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:55:58,852 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:55:58,852 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:55:58,852 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:55:58,853 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:55:58,853 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:55:58,853 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:55:58,855 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:55:58,856 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:55:58,866 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:55:58,901 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(951): ClusterId : 3cfdcb92-0615-4f8d-8764-41f8c3793b17 2023-05-24 16:55:58,902 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:55:58,906 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:55:58,906 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:55:58,909 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:55:58,911 DEBUG [RS:0;jenkins-hbase20:37009] zookeeper.ReadOnlyZKClient(139): Connect 0x681244a9 to 127.0.0.1:50259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:58,924 DEBUG [RS:0;jenkins-hbase20:37009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66d5a7db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:58,924 DEBUG [RS:0;jenkins-hbase20:37009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62ea2fd9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:55:58,939 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:37009 2023-05-24 16:55:58,939 INFO [RS:0;jenkins-hbase20:37009] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:55:58,939 INFO [RS:0;jenkins-hbase20:37009] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:55:58,939 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 16:55:58,940 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,40141,1684947358650 with isa=jenkins-hbase20.apache.org/148.251.75.209:37009, startcode=1684947358686 2023-05-24 16:55:58,940 DEBUG [RS:0;jenkins-hbase20:37009] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:55:58,944 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57683, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:55:58,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,946 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe 2023-05-24 16:55:58,946 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37907 2023-05-24 16:55:58,946 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:55:58,947 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:55:58,948 DEBUG [RS:0;jenkins-hbase20:37009] zookeeper.ZKUtil(162): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,948 WARN [RS:0;jenkins-hbase20:37009] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 16:55:58,948 INFO [RS:0;jenkins-hbase20:37009] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:58,948 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,948 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,37009,1684947358686] 2023-05-24 16:55:58,952 DEBUG [RS:0;jenkins-hbase20:37009] zookeeper.ZKUtil(162): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,953 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:55:58,953 INFO [RS:0;jenkins-hbase20:37009] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:55:58,954 INFO [RS:0;jenkins-hbase20:37009] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:55:58,955 INFO [RS:0;jenkins-hbase20:37009] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:55:58,955 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,955 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:55:58,956 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
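The WALProvider named above (org.apache.hadoop.hbase.wal.FSHLogProvider) is selected through the hbase.wal.provider setting. A short, illustrative configuration sketch; "filesystem" is the stock alias for FSHLogProvider, with "asyncfs" and "multiwal" as the other built-in choices:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfigSketch {
      public static void main(String[] args) {
        // Sketch only: pick the FSHLog-based provider seen in the log above.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");
        System.out.println("WAL provider = " + conf.get("hbase.wal.provider"));
      }
    }
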
2023-05-24 16:55:58,956 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,956 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 DEBUG [RS:0;jenkins-hbase20:37009] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:55:58,957 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,958 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,958 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:58,971 INFO [RS:0;jenkins-hbase20:37009] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:55:58,971 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37009,1684947358686-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:55:58,979 INFO [RS:0;jenkins-hbase20:37009] regionserver.Replication(203): jenkins-hbase20.apache.org,37009,1684947358686 started 2023-05-24 16:55:58,979 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,37009,1684947358686, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:37009, sessionid=0x1017e666bd10001 2023-05-24 16:55:58,979 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:55:58,979 DEBUG [RS:0;jenkins-hbase20:37009] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,979 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37009,1684947358686' 2023-05-24 16:55:58,979 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37009,1684947358686' 2023-05-24 16:55:58,980 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:55:58,981 DEBUG [RS:0;jenkins-hbase20:37009] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:55:58,981 DEBUG [RS:0;jenkins-hbase20:37009] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:55:58,981 INFO [RS:0;jenkins-hbase20:37009] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:55:58,981 INFO [RS:0;jenkins-hbase20:37009] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
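Both quota managers above report themselves disabled because quota support is off unless explicitly enabled. A minimal sketch, assuming the usual hbase.quota.enabled switch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaConfigSketch {
      public static void main(String[] args) {
        // Sketch only: with this flag true, RegionServerRpcQuotaManager and
        // RegionServerSpaceQuotaManager would start instead of logging "disabled".
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println("quotas enabled = " + conf.getBoolean("hbase.quota.enabled", false));
      }
    }
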
2023-05-24 16:55:59,006 DEBUG [jenkins-hbase20:40141] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:55:59,007 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37009,1684947358686, state=OPENING 2023-05-24 16:55:59,009 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:55:59,010 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:59,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37009,1684947358686}] 2023-05-24 16:55:59,011 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:55:59,084 INFO [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37009%2C1684947358686, suffix=, logDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686, archiveDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs, maxLogs=32 2023-05-24 16:55:59,097 INFO [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947359085 2023-05-24 16:55:59,097 DEBUG [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK], DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK]] 2023-05-24 16:55:59,166 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:59,166 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:55:59,169 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:55:59,175 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:55:59,175 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:55:59,179 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37009%2C1684947358686.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686, archiveDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs, maxLogs=32 2023-05-24 16:55:59,189 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.meta.1684947359179.meta 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK], DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK]] 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:55:59,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:55:59,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:55:59,191 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:55:59,192 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/info 2023-05-24 16:55:59,192 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/info 2023-05-24 16:55:59,192 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:55:59,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:59,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:55:59,194 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:59,194 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:55:59,194 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:55:59,194 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:59,194 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:55:59,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/table 2023-05-24 16:55:59,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/table 2023-05-24 16:55:59,196 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:55:59,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:59,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740 2023-05-24 16:55:59,198 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740 2023-05-24 16:55:59,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:55:59,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:55:59,203 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=796695, jitterRate=0.013050884008407593}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:55:59,203 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:55:59,205 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947359166 2023-05-24 16:55:59,209 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:55:59,210 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:55:59,211 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37009,1684947358686, state=OPEN 2023-05-24 16:55:59,212 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:55:59,212 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:55:59,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:55:59,215 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37009,1684947358686 in 201 msec 2023-05-24 
16:55:59,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:55:59,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-05-24 16:55:59,219 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 410 msec 2023-05-24 16:55:59,219 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947359219, completionTime=-1 2023-05-24 16:55:59,219 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:55:59,219 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 16:55:59,223 DEBUG [hconnection-0x5bd616d4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:55:59,224 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:55:59,226 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:55:59,226 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947419226 2023-05-24 16:55:59,226 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947479226 2023-05-24 16:55:59,226 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40141,1684947358650-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40141,1684947358650-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40141,1684947358650-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:40141, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
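The BalancerChore and RegionNormalizerChore enabled above can also be switched on and off at runtime from a client. A sketch using the HBase 2.x Admin API, assuming a client configuration that points at this mini cluster's ZooKeeper quorum:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class BalancerToggleSketch {
      public static void main(String[] args) throws Exception {
        // Illustrative sketch: toggles the chores whose start-up is logged above.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.balancerSwitch(false, false);   // keep the BalancerChore from moving regions
          admin.normalizerSwitch(false);        // likewise for the RegionNormalizerChore
          System.out.println("balancer enabled = " + admin.isBalancerEnabled());
        }
      }
    }
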
2023-05-24 16:55:59,232 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:59,233 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:55:59,233 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:55:59,236 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:55:59,237 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:55:59,239 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/hbase/namespace/99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,240 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/hbase/namespace/99777e4fe481432b85b97558bdf51934 empty. 2023-05-24 16:55:59,240 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/hbase/namespace/99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,240 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:55:59,261 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:59,263 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 99777e4fe481432b85b97558bdf51934, NAME => 'hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp 2023-05-24 16:55:59,274 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:59,274 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 99777e4fe481432b85b97558bdf51934, disabling compactions & flushes 2023-05-24 16:55:59,274 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,274 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,274 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. after waiting 0 ms 2023-05-24 16:55:59,274 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,275 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,275 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 99777e4fe481432b85b97558bdf51934: 2023-05-24 16:55:59,277 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:55:59,278 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947359278"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947359278"}]},"ts":"1684947359278"} 2023-05-24 16:55:59,281 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:55:59,283 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:55:59,283 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947359283"}]},"ts":"1684947359283"} 2023-05-24 16:55:59,284 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:55:59,289 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=99777e4fe481432b85b97558bdf51934, ASSIGN}] 2023-05-24 16:55:59,291 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=99777e4fe481432b85b97558bdf51934, ASSIGN 2023-05-24 16:55:59,292 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=99777e4fe481432b85b97558bdf51934, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37009,1684947358686; forceNewPlan=false, retain=false 2023-05-24 16:55:59,443 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=99777e4fe481432b85b97558bdf51934, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:59,443 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947359443"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947359443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947359443"}]},"ts":"1684947359443"} 2023-05-24 16:55:59,445 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 99777e4fe481432b85b97558bdf51934, server=jenkins-hbase20.apache.org,37009,1684947358686}] 2023-05-24 16:55:59,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 99777e4fe481432b85b97558bdf51934, NAME => 'hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:55:59,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:59,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,606 INFO [StoreOpener-99777e4fe481432b85b97558bdf51934-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,608 DEBUG [StoreOpener-99777e4fe481432b85b97558bdf51934-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/info 2023-05-24 16:55:59,608 DEBUG [StoreOpener-99777e4fe481432b85b97558bdf51934-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/info 2023-05-24 16:55:59,609 INFO [StoreOpener-99777e4fe481432b85b97558bdf51934-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99777e4fe481432b85b97558bdf51934 columnFamilyName info 2023-05-24 16:55:59,609 INFO [StoreOpener-99777e4fe481432b85b97558bdf51934-1] regionserver.HStore(310): Store=99777e4fe481432b85b97558bdf51934/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:55:59,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,614 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 99777e4fe481432b85b97558bdf51934 2023-05-24 16:55:59,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:55:59,617 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 99777e4fe481432b85b97558bdf51934; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=822613, jitterRate=0.046007364988327026}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:55:59,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 99777e4fe481432b85b97558bdf51934: 2023-05-24 16:55:59,620 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934., pid=6, masterSystemTime=1684947359598 2023-05-24 16:55:59,623 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:55:59,623 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 
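The hbase:namespace region just opened was created from the descriptor logged by the CreateTableProcedure above (an 'info' family with a ROW bloom filter, in-memory blocks, 10 versions and an 8 KB block size). A sketch of an equivalent descriptor built with the public TableDescriptorBuilder API; it mirrors the logged attributes rather than the master's internal code path:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the attributes shown in the CreateTableProcedure log entry above.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        System.out.println(td);
      }
    }
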
2023-05-24 16:55:59,623 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=99777e4fe481432b85b97558bdf51934, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:59,624 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947359623"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947359623"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947359623"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947359623"}]},"ts":"1684947359623"} 2023-05-24 16:55:59,629 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:55:59,629 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 99777e4fe481432b85b97558bdf51934, server=jenkins-hbase20.apache.org,37009,1684947358686 in 181 msec 2023-05-24 16:55:59,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:55:59,632 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=99777e4fe481432b85b97558bdf51934, ASSIGN in 340 msec 2023-05-24 16:55:59,633 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:55:59,634 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947359633"}]},"ts":"1684947359633"} 2023-05-24 16:55:59,636 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:55:59,638 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:55:59,639 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:55:59,639 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:59,640 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:59,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 407 msec 2023-05-24 16:55:59,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:55:59,656 DEBUG [Listener at localhost.localdomain/37233-EventThread] 
zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:59,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-05-24 16:55:59,666 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:55:59,674 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:55:59,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-24 16:55:59,689 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:55:59,691 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.989sec 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40141,1684947358650-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:55:59,691 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40141,1684947358650-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 16:55:59,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:55:59,700 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ReadOnlyZKClient(139): Connect 0x5aae965e to 127.0.0.1:50259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:55:59,706 DEBUG [Listener at localhost.localdomain/37233] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@850dd75, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:55:59,708 DEBUG [hconnection-0x2138f06c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:55:59,711 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:55:59,713 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:55:59,713 INFO [Listener at localhost.localdomain/37233] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:55:59,718 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:55:59,718 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:55:59,720 INFO [Listener at localhost.localdomain/37233] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:55:59,722 DEBUG [Listener at localhost.localdomain/37233] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 16:55:59,726 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 16:55:59,728 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 16:55:59,728 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 16:55:59,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:55:59,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:55:59,733 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:55:59,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-24 16:55:59,735 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:55:59,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:55:59,737 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:55:59,738 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59 empty. 
2023-05-24 16:55:59,738 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:55:59,739 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-24 16:55:59,750 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 16:55:59,751 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6c8bc4bcf3bc812a87ea57dc3d98f59, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/.tmp 2023-05-24 16:55:59,758 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:55:59,759 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing f6c8bc4bcf3bc812a87ea57dc3d98f59, disabling compactions & flushes 2023-05-24 16:55:59,759 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:55:59,759 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:55:59,759 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. after waiting 0 ms 2023-05-24 16:55:59,759 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:55:59,759 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 
2023-05-24 16:55:59,759 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:55:59,761 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:55:59,762 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684947359762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947359762"}]},"ts":"1684947359762"} 2023-05-24 16:55:59,764 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:55:59,765 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:55:59,765 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947359765"}]},"ts":"1684947359765"} 2023-05-24 16:55:59,766 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-24 16:55:59,770 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=f6c8bc4bcf3bc812a87ea57dc3d98f59, ASSIGN}] 2023-05-24 16:55:59,772 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=f6c8bc4bcf3bc812a87ea57dc3d98f59, ASSIGN 2023-05-24 16:55:59,772 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=f6c8bc4bcf3bc812a87ea57dc3d98f59, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37009,1684947358686; forceNewPlan=false, retain=false 2023-05-24 16:55:59,924 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f6c8bc4bcf3bc812a87ea57dc3d98f59, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:55:59,924 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684947359924"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947359924"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947359924"}]},"ts":"1684947359924"} 2023-05-24 16:55:59,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure f6c8bc4bcf3bc812a87ea57dc3d98f59, server=jenkins-hbase20.apache.org,37009,1684947358686}] 2023-05-24 16:56:00,089 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:00,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6c8bc4bcf3bc812a87ea57dc3d98f59, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:56:00,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:56:00,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,093 INFO [StoreOpener-f6c8bc4bcf3bc812a87ea57dc3d98f59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,094 DEBUG [StoreOpener-f6c8bc4bcf3bc812a87ea57dc3d98f59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info 2023-05-24 16:56:00,094 DEBUG [StoreOpener-f6c8bc4bcf3bc812a87ea57dc3d98f59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info 2023-05-24 16:56:00,095 INFO [StoreOpener-f6c8bc4bcf3bc812a87ea57dc3d98f59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6c8bc4bcf3bc812a87ea57dc3d98f59 columnFamilyName info 2023-05-24 16:56:00,095 INFO [StoreOpener-f6c8bc4bcf3bc812a87ea57dc3d98f59-1] regionserver.HStore(310): 
Store=f6c8bc4bcf3bc812a87ea57dc3d98f59/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:56:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,099 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:56:00,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:56:00,101 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f6c8bc4bcf3bc812a87ea57dc3d98f59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=825754, jitterRate=0.05000069737434387}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:56:00,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:00,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59., pid=11, masterSystemTime=1684947360080 2023-05-24 16:56:00,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:00,104 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 
2023-05-24 16:56:00,105 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f6c8bc4bcf3bc812a87ea57dc3d98f59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:00,105 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684947360105"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947360105"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947360105"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947360105"}]},"ts":"1684947360105"} 2023-05-24 16:56:00,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 16:56:00,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure f6c8bc4bcf3bc812a87ea57dc3d98f59, server=jenkins-hbase20.apache.org,37009,1684947358686 in 181 msec 2023-05-24 16:56:00,113 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 16:56:00,113 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=f6c8bc4bcf3bc812a87ea57dc3d98f59, ASSIGN in 341 msec 2023-05-24 16:56:00,114 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:56:00,114 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947360114"}]},"ts":"1684947360114"} 2023-05-24 16:56:00,116 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-24 16:56:00,118 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:56:00,120 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 390 msec 2023-05-24 16:56:04,744 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:56:04,953 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:09,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:56:09,738 INFO [Listener at localhost.localdomain/37233] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-24 16:56:09,744 DEBUG [Listener at 
localhost.localdomain/37233] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:09,744 DEBUG [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:09,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 16:56:09,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-24 16:56:09,770 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-24 16:56:09,770 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:09,771 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-24 16:56:09,771 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-24 16:56:09,772 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,772 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 16:56:09,773 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:09,773 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,773 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:09,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:09,774 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,774 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 16:56:09,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: 
/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 16:56:09,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,775 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 16:56:09,775 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 16:56:09,775 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-24 16:56:09,777 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-24 16:56:09,777 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-24 16:56:09,777 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:09,778 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-24 16:56:09,779 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 16:56:09,779 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 16:56:09,779 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:56:09,780 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. started... 
2023-05-24 16:56:09,780 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 99777e4fe481432b85b97558bdf51934 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:56:09,793 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/.tmp/info/606f5ea8c3f04bd7b5d5ec5934a95804 2023-05-24 16:56:09,804 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/.tmp/info/606f5ea8c3f04bd7b5d5ec5934a95804 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/info/606f5ea8c3f04bd7b5d5ec5934a95804 2023-05-24 16:56:09,811 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/info/606f5ea8c3f04bd7b5d5ec5934a95804, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 16:56:09,812 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 99777e4fe481432b85b97558bdf51934 in 32ms, sequenceid=6, compaction requested=false 2023-05-24 16:56:09,813 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 99777e4fe481432b85b97558bdf51934: 2023-05-24 16:56:09,813 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:56:09,813 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 16:56:09,813 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-24 16:56:09,813 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,813 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-24 16:56:09,813 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-24 16:56:09,815 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,815 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:09,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:09,815 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,816 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 16:56:09,816 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:09,816 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:09,816 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,817 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,817 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:09,817 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-24 16:56:09,818 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-24 16:56:09,818 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@439c74a0[Count = 0] remaining members to acquire global barrier 2023-05-24 16:56:09,818 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,819 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,819 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,819 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,819 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-24 16:56:09,820 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,820 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-24 16:56:09,820 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 16:56:09,820 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,37009,1684947358686' in zk 2023-05-24 16:56:09,821 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-24 16:56:09,821 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,821 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:09,821 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,821 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 
2023-05-24 16:56:09,821 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:09,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:09,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:09,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:09,822 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,823 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,823 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:09,823 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,823 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,824 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,37009,1684947358686': 2023-05-24 16:56:09,824 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-24 16:56:09,824 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-24 16:56:09,824 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' released barrier for procedure'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-05-24 16:56:09,824 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 16:56:09,824 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-24 16:56:09,824 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 16:56:09,826 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:09,826 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:09,826 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:09,826 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,826 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,827 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:09,827 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,827 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,827 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,828 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:09,828 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 16:56:09,828 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,842 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,842 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:09,842 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 16:56:09,842 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:09,842 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:09,842 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:09,842 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 16:56:09,843 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:09,843 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 16:56:09,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-24 16:56:09,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 16:56:09,843 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,844 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 16:56:09,844 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 16:56:09,844 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:09,844 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:09,847 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-24 16:56:09,847 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 16:56:19,847 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-24 16:56:19,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 16:56:19,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 16:56:19,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,864 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:19,864 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:19,864 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 16:56:19,864 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-24 16:56:19,865 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,865 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,866 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,866 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:19,866 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:19,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:19,866 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,866 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 16:56:19,866 
DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 16:56:19,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,867 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,867 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,867 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 16:56:19,867 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:19,868 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 16:56:19,868 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 16:56:19,868 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 16:56:19,868 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:19,868 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. started... 
2023-05-24 16:56:19,868 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing f6c8bc4bcf3bc812a87ea57dc3d98f59 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:56:19,883 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/cb9077a3be444fe680768567bbfc2838 2023-05-24 16:56:19,891 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/cb9077a3be444fe680768567bbfc2838 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838 2023-05-24 16:56:19,897 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838, entries=1, sequenceid=5, filesize=5.8 K 2023-05-24 16:56:19,898 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for f6c8bc4bcf3bc812a87ea57dc3d98f59 in 30ms, sequenceid=5, compaction requested=false 2023-05-24 16:56:19,898 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:19,898 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:19,899 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 16:56:19,899 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-24 16:56:19,899 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,899 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 16:56:19,899 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 16:56:19,900 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,900 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,900 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:19,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:19,901 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,901 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 16:56:19,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:19,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:19,901 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,902 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:19,902 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 16:56:19,903 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@47a3e80b[Count = 0] 
remaining members to acquire global barrier 2023-05-24 16:56:19,903 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 16:56:19,903 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,903 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,903 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,903 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,904 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-24 16:56:19,904 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 16:56:19,904 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,37009,1684947358686' in zk 2023-05-24 16:56:19,904 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,904 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 16:56:19,905 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,905 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 16:56:19,905 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,905 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:19,905 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:19,905 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:19,905 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-24 16:56:19,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:19,906 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:19,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,907 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:19,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,909 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,37009,1684947358686': 2023-05-24 16:56:19,909 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 16:56:19,909 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 16:56:19,909 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-24 16:56:19,909 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 16:56:19,909 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,909 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 16:56:19,914 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,914 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,914 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:19,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:19,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,914 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:19,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:19,915 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:19,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:19,915 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:19,916 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,916 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,916 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:19,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,917 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,925 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 16:56:19,925 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:19,925 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:19,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:19,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 16:56:19,925 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:19,926 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,926 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,926 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:19,926 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:19,926 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 16:56:19,926 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:19,926 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 16:56:29,926 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-24 16:56:29,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 16:56:29,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 16:56:29,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-24 16:56:29,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,944 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:29,944 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:29,945 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 16:56:29,945 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-24 16:56:29,946 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,946 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,947 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,947 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:29,947 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:29,947 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:29,947 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,947 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 16:56:29,947 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,948 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 16:56:29,948 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,948 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,948 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 16:56:29,948 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,948 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 16:56:29,948 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:29,949 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 16:56:29,949 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 16:56:29,949 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 16:56:29,949 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:29,949 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. started... 
2023-05-24 16:56:29,949 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing f6c8bc4bcf3bc812a87ea57dc3d98f59 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:56:29,959 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/4f7b48542ef24580b3b6a67b41678866 2023-05-24 16:56:29,968 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/4f7b48542ef24580b3b6a67b41678866 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866 2023-05-24 16:56:29,974 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866, entries=1, sequenceid=9, filesize=5.8 K 2023-05-24 16:56:29,975 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for f6c8bc4bcf3bc812a87ea57dc3d98f59 in 26ms, sequenceid=9, compaction requested=false 2023-05-24 16:56:29,975 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:29,975 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:29,975 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 16:56:29,975 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-24 16:56:29,975 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,975 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 16:56:29,975 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 16:56:29,977 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,977 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:29,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:29,978 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,978 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 16:56:29,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:29,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:29,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,979 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,979 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:29,980 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 16:56:29,980 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2a17e9b[Count = 0] 
remaining members to acquire global barrier 2023-05-24 16:56:29,980 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 16:56:29,980 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,980 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,981 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,981 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,981 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-24 16:56:29,981 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 16:56:29,981 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,37009,1684947358686' in zk 2023-05-24 16:56:29,981 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,981 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 16:56:29,982 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,982 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 16:56:29,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,983 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:29,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:29,983 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:29,983 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-24 16:56:29,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:29,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:29,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:29,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,986 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,37009,1684947358686': 2023-05-24 16:56:29,986 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 16:56:29,986 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 16:56:29,987 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-24 16:56:29,987 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 16:56:29,987 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,987 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 16:56:29,996 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,996 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,996 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,996 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:29,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:29,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:29,997 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:29,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:29,997 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:29,997 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,998 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,998 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:29,998 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:29,998 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:29,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:29,999 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,000 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:30,005 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:30,005 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:30,005 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,005 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:30,009 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 16:56:30,009 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:30,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 16:56:30,009 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:30,009 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:30,010 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:30,010 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 16:56:30,010 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 16:56:40,010 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-24 16:56:40,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 16:56:40,032 INFO [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947359085 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947400017 2023-05-24 16:56:40,033 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK], DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK]] 2023-05-24 16:56:40,033 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947359085 is not closed yet, will try archiving it next time 2023-05-24 16:56:40,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 16:56:40,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-24 16:56:40,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,044 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:40,045 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:40,045 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 16:56:40,045 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-24 16:56:40,046 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,046 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,047 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,047 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:40,047 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:40,047 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:40,048 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,048 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 16:56:40,048 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,048 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,049 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 16:56:40,049 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,049 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,049 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 16:56:40,049 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,049 DEBUG [member: 
'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 16:56:40,049 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:40,050 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 16:56:40,050 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 16:56:40,050 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 16:56:40,050 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:40,050 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. started... 2023-05-24 16:56:40,050 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing f6c8bc4bcf3bc812a87ea57dc3d98f59 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:56:40,065 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/06eb74c152e54bcf9abde5ec9d004557 2023-05-24 16:56:40,078 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/06eb74c152e54bcf9abde5ec9d004557 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557 2023-05-24 16:56:40,083 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557, entries=1, sequenceid=13, filesize=5.8 K 2023-05-24 16:56:40,084 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
f6c8bc4bcf3bc812a87ea57dc3d98f59 in 34ms, sequenceid=13, compaction requested=true 2023-05-24 16:56:40,084 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:40,084 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:40,084 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 16:56:40,084 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-24 16:56:40,084 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,084 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 16:56:40,084 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 16:56:40,086 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,086 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:40,086 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:40,086 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,086 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-05-24 16:56:40,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:40,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:40,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,087 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:40,088 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 16:56:40,088 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@137075ef[Count = 0] remaining members to acquire global barrier 2023-05-24 16:56:40,088 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 16:56:40,088 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,088 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,088 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,088 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,088 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
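What the coordinator and the single member are doing here is a two-phase barrier kept entirely in ZooKeeper: the coordinator announces the procedure under /hbase/flush-table-proc/acquired, the member adds its own child node once its local work is staged, and when every member has joined the coordinator creates the matching node under /reached; /abort is reserved for error propagation. A minimal sketch of that handshake against the raw ZooKeeper client, assuming the parent znodes already exist (this stands in for the idea, it is not HBase's ZKProcedureCoordinator):

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public final class FlushBarrierSketch {
  public static void main(String[] args) throws Exception {
    String base = "/hbase/flush-table-proc";
    String proc = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
    String member = "jenkins-hbase20.apache.org,37009,1684947358686";
    ZooKeeper zk = new ZooKeeper("127.0.0.1:50259", 30000, event -> { });

    // Coordinator: announce the procedure and start the 'acquire' phase.
    zk.create(base + "/acquired/" + proc, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Member: join the barrier once its local flush work is ready.
    zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Coordinator: all members joined, publish the global 'reached' barrier.
    zk.create(base + "/reached/" + proc, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    zk.close();
  }
}

In the real exchange each side also sets watches (the "Set watcher on znode that does not yet exist" entries) so it is notified the moment the other side's node appears.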
2023-05-24 16:56:40,089 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 16:56:40,089 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,089 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 16:56:40,089 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,37009,1684947358686' in zk 2023-05-24 16:56:40,090 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 16:56:40,090 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,090 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:40,090 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,091 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:40,091 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:40,090 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
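A few entries back the member's flush wrote the new HFile (06eb74c152e54bcf9abde5ec9d004557, 5.8 K, sequenceid=13) under the region's .tmp directory and only then committed it into info/, so readers never see a partially written store file. A minimal sketch of that stage-then-rename pattern with the plain Hadoop FileSystem API; the paths are shortened and illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class TmpThenCommit {
  static void writeAndCommit(Configuration conf, byte[] payload) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    Path tmp = new Path("/region/.tmp/info/06eb74c152e54bcf9abde5ec9d004557");
    Path dst = new Path("/region/info/06eb74c152e54bcf9abde5ec9d004557");
    try (FSDataOutputStream out = fs.create(tmp)) {
      out.write(payload); // stage the complete file out of readers' sight
    }
    if (!fs.rename(tmp, dst)) { // single rename moves it into the live store dir
      throw new IllegalStateException("commit failed for " + dst);
    }
  }
}

The same move-into-place trick shows up again below when a rolled WAL is retired into oldWALs.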
2023-05-24 16:56:40,091 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:40,091 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:40,091 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,092 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,092 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:40,092 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,092 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,093 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,37009,1684947358686': 2023-05-24 16:56:40,093 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 16:56:40,093 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 16:56:40,093 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-24 16:56:40,093 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 16:56:40,093 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,093 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 16:56:40,094 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,094 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,094 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for 
znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,094 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:40,094 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:40,094 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:40,094 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:40,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,095 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,095 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:40,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,098 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:40,098 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): 
regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:40,098 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:40,098 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:40,098 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,098 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:40,098 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 16:56:40,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:40,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 16:56:40,098 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,099 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:40,099 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,099 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:40,100 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,100 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-24 16:56:50,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 16:56:50,102 DEBUG [Listener at localhost.localdomain/37233] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:56:50,113 DEBUG [Listener at localhost.localdomain/37233] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:56:50,114 DEBUG [Listener at localhost.localdomain/37233] regionserver.HStore(1912): f6c8bc4bcf3bc812a87ea57dc3d98f59/info is initiating minor compaction (all files) 2023-05-24 16:56:50,114 INFO [Listener at localhost.localdomain/37233] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:56:50,114 INFO [Listener at localhost.localdomain/37233] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:56:50,114 INFO [Listener at localhost.localdomain/37233] regionserver.HRegion(2259): Starting compaction of f6c8bc4bcf3bc812a87ea57dc3d98f59/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 
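At 16:56:50,102 a minor compaction is selected: three eligible store files (the three 5.8 K flush outputs, 17,769 bytes in total, with 16 files as the blocking threshold) are examined and the exploring policy keeps all three. A simplified stand-in for the kind of size-ratio test such a policy applies, not ExploringCompactionPolicy itself, with per-file sizes that are only illustrative:

import java.util.List;

public final class RatioCheckSketch {
  // True if every file is at most `ratio` times the combined size of the others.
  static boolean withinRatio(List<Long> fileSizes, double ratio) {
    long total = fileSizes.stream().mapToLong(Long::longValue).sum();
    for (long size : fileSizes) {
      if (size > (total - size) * ratio) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // Three files of roughly equal size easily satisfy the ratio, so all of them
    // end up in the selection, matching the log's "3 files of size 17769".
    System.out.println(withinRatio(List.of(5923L, 5923L, 5923L), 1.2)); // true
  }
}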
2023-05-24 16:56:50,115 INFO [Listener at localhost.localdomain/37233] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557] into tmpdir=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp, totalSize=17.4 K 2023-05-24 16:56:50,115 DEBUG [Listener at localhost.localdomain/37233] compactions.Compactor(207): Compacting cb9077a3be444fe680768567bbfc2838, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1684947379857 2023-05-24 16:56:50,116 DEBUG [Listener at localhost.localdomain/37233] compactions.Compactor(207): Compacting 4f7b48542ef24580b3b6a67b41678866, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1684947389930 2023-05-24 16:56:50,116 DEBUG [Listener at localhost.localdomain/37233] compactions.Compactor(207): Compacting 06eb74c152e54bcf9abde5ec9d004557, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1684947400015 2023-05-24 16:56:50,134 INFO [Listener at localhost.localdomain/37233] throttle.PressureAwareThroughputController(145): f6c8bc4bcf3bc812a87ea57dc3d98f59#info#compaction#21 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:56:50,148 DEBUG [Listener at localhost.localdomain/37233] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/5a692fccc3dc4f4fb6e6b5b3536bfe68 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/5a692fccc3dc4f4fb6e6b5b3536bfe68 2023-05-24 16:56:50,154 INFO [Listener at localhost.localdomain/37233] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f6c8bc4bcf3bc812a87ea57dc3d98f59/info of f6c8bc4bcf3bc812a87ea57dc3d98f59 into 5a692fccc3dc4f4fb6e6b5b3536bfe68(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
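The compaction writer runs under a throughput controller configured with a 50 MB/s lower and 100 MB/s upper bound; for a 17 K input it never needs to throttle, which is what "slept 0 time(s) and total slept time is 0 ms" reports. A generic sketch of that style of limiter, not the PressureAware controller itself: it sleeps whenever the bytes written so far run ahead of the per-second budget.

public final class SimpleThroughputLimiter {
  private final double bytesPerSecond;
  private final long startNanos = System.nanoTime();
  private long written;

  SimpleThroughputLimiter(double bytesPerSecond) {
    this.bytesPerSecond = bytesPerSecond;
  }

  // Call after each chunk is written; sleeps only if we are ahead of budget.
  void onWrite(long bytes) throws InterruptedException {
    written += bytes;
    double elapsedSec = (System.nanoTime() - startNanos) / 1e9;
    double aheadBySec = written / bytesPerSecond - elapsedSec;
    if (aheadBySec > 0) {
      Thread.sleep((long) (aheadBySec * 1000));
    }
  }
}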
2023-05-24 16:56:50,155 DEBUG [Listener at localhost.localdomain/37233] regionserver.HRegion(2289): Compaction status journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:50,168 INFO [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947400017 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947410156 2023-05-24 16:56:50,168 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK], DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK]] 2023-05-24 16:56:50,168 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947400017 is not closed yet, will try archiving it next time 2023-05-24 16:56:50,169 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947359085 to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947359085 2023-05-24 16:56:50,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 16:56:50,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-24 16:56:50,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,178 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:50,178 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:50,179 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 16:56:50,179 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-24 16:56:50,179 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,179 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,181 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,181 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:50,181 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:50,181 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:50,181 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,181 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 16:56:50,181 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,181 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 16:56:50,182 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,182 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,182 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 16:56:50,182 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,182 DEBUG [member: 
'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 16:56:50,182 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 16:56:50,182 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 16:56:50,183 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 16:56:50,183 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 16:56:50,183 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:50,183 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. started... 2023-05-24 16:56:50,183 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing f6c8bc4bcf3bc812a87ea57dc3d98f59 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:56:50,197 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/38799ec90ed042b298c420ab686f667a 2023-05-24 16:56:50,204 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/38799ec90ed042b298c420ab686f667a as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/38799ec90ed042b298c420ab686f667a 2023-05-24 16:56:50,209 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/38799ec90ed042b298c420ab686f667a, entries=1, sequenceid=18, filesize=5.8 K 2023-05-24 16:56:50,210 INFO [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
f6c8bc4bcf3bc812a87ea57dc3d98f59 in 27ms, sequenceid=18, compaction requested=false 2023-05-24 16:56:50,210 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:56:50,210 DEBUG [rs(jenkins-hbase20.apache.org,37009,1684947358686)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:56:50,210 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 16:56:50,210 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-24 16:56:50,210 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,210 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 16:56:50,210 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 16:56:50,212 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,212 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:50,212 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:50,212 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,213 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-05-24 16:56:50,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:50,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:50,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,213 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:50,214 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,37009,1684947358686' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 16:56:50,214 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 16:56:50,214 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@33693a55[Count = 0] remaining members to acquire global barrier 2023-05-24 16:56:50,214 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,214 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,215 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,215 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,215 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-24 16:56:50,215 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 16:56:50,215 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,37009,1684947358686' in zk 2023-05-24 16:56:50,215 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,215 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 16:56:50,216 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,216 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 16:56:50,216 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,216 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 16:56:50,216 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 16:56:50,216 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:50,216 DEBUG [member: 'jenkins-hbase20.apache.org,37009,1684947358686' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
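Both sides also guard each run with a one-shot 60 s timer (the "Scheduling process timer to run in: 60000 ms" entries); once the subprocedure finishes it is "marked complete" so the timer can never fire an error. A minimal watchdog in the same spirit, using java.util.Timer; the class and method names are illustrative, this is not TimeoutExceptionInjector:

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

public final class WatchdogSketch {
  private final Timer timer = new Timer(true); // daemon timer thread
  private final AtomicBoolean done = new AtomicBoolean();

  void start(long timeoutMs, Runnable onTimeout) {
    timer.schedule(new TimerTask() {
      @Override public void run() {
        if (!done.get()) {
          onTimeout.run(); // e.g. abort the procedure with a timeout error
        }
      }
    }, timeoutMs);
  }

  void complete() { // the "Marking timer as complete" step in the log
    done.set(true);
    timer.cancel();
  }
}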
2023-05-24 16:56:50,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:50,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:50,217 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:50,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,218 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,219 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,37009,1684947358686': 2023-05-24 16:56:50,219 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,37009,1684947358686' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 16:56:50,219 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 16:56:50,219 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-24 16:56:50,219 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 16:56:50,219 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,219 INFO [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 16:56:50,220 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,220 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 
16:56:50,220 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 16:56:50,220 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,220 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 16:56:50,221 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,221 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,221 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 16:56:50,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,222 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 16:56:50,222 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,223 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,225 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 
16:56:50,225 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 16:56:50,225 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,225 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 16:56:50,225 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:56:50,225 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 16:56:50,225 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 16:56:50,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 16:56:50,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 16:56:50,225 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 16:56:50,225 DEBUG [(jenkins-hbase20.apache.org,40141,1684947358650)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 16:56:50,225 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 16:56:50,225 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 16:56:50,226 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 16:57:00,226 DEBUG [Listener at localhost.localdomain/37233] client.HBaseAdmin(2704): Getting current status of procedure from master... 
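The entries above show the master-side 'flush-table-proc' barrier completing while the client (HBaseAdmin) polls the master for procedure completion. For orientation only, here is a minimal sketch of driving the same flush through the public Admin API against an HBaseTestingUtility mini-cluster; this is an illustrative assumption, not the actual TestLogRolling source — the table and family names are simply copied from the log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();  // one master and one region server by default
    TableName tn = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    Table table = util.createTable(tn, Bytes.toBytes("info"));  // 'info' family matches the store files in the log
    table.put(new Put(Bytes.toBytes("row1"))
        .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v")));
    Admin admin = util.getAdmin();
    admin.flush(tn);  // synchronous client call behind the 'flush-table-proc' coordination logged above
    util.shutdownMiniCluster();
  }
}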
2023-05-24 16:57:00,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40141] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 16:57:00,239 INFO [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947410156 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947420231 2023-05-24 16:57:00,240 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42165,DS-05569dcd-0245-46ad-b554-a69122f39565,DISK], DatanodeInfoWithStorage[127.0.0.1:37957,DS-d45b1c83-17cf-4463-8a36-b63389354241,DISK]] 2023-05-24 16:57:00,240 DEBUG [Listener at localhost.localdomain/37233] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947410156 is not closed yet, will try archiving it next time 2023-05-24 16:57:00,240 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947400017 to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs/jenkins-hbase20.apache.org%2C37009%2C1684947358686.1684947400017 2023-05-24 16:57:00,240 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 16:57:00,240 INFO [Listener at localhost.localdomain/37233] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 16:57:00,241 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5aae965e to 127.0.0.1:50259 2023-05-24 16:57:00,242 DEBUG [Listener at localhost.localdomain/37233] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:57:00,242 DEBUG [Listener at localhost.localdomain/37233] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 16:57:00,242 DEBUG [Listener at localhost.localdomain/37233] util.JVMClusterUtil(257): Found active master hash=1139272516, stopped=false 2023-05-24 16:57:00,242 INFO [Listener at localhost.localdomain/37233] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:57:00,244 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:57:00,244 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:57:00,244 INFO [Listener at localhost.localdomain/37233] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 16:57:00,244 DEBUG [Listener at 
localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:00,245 DEBUG [Listener at localhost.localdomain/37233] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b5a0193 to 127.0.0.1:50259 2023-05-24 16:57:00,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:57:00,245 DEBUG [Listener at localhost.localdomain/37233] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:57:00,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:57:00,245 INFO [Listener at localhost.localdomain/37233] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,37009,1684947358686' ***** 2023-05-24 16:57:00,245 INFO [Listener at localhost.localdomain/37233] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:57:00,246 INFO [RS:0;jenkins-hbase20:37009] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:57:00,246 INFO [RS:0;jenkins-hbase20:37009] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:57:00,246 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:57:00,246 INFO [RS:0;jenkins-hbase20:37009] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 16:57:00,246 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(3303): Received CLOSE for f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(3303): Received CLOSE for 99777e4fe481432b85b97558bdf51934 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:57:00,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f6c8bc4bcf3bc812a87ea57dc3d98f59, disabling compactions & flushes 2023-05-24 16:57:00,247 DEBUG [RS:0;jenkins-hbase20:37009] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x681244a9 to 127.0.0.1:50259 2023-05-24 16:57:00,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:57:00,247 DEBUG [RS:0;jenkins-hbase20:37009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:57:00,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:57:00,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 
after waiting 0 ms 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 16:57:00,247 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 16:57:00,247 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing f6c8bc4bcf3bc812a87ea57dc3d98f59 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:57:00,247 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 16:57:00,248 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, f6c8bc4bcf3bc812a87ea57dc3d98f59=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59., 99777e4fe481432b85b97558bdf51934=hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934.} 2023-05-24 16:57:00,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:57:00,249 DEBUG [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1504): Waiting on 1588230740, 99777e4fe481432b85b97558bdf51934, f6c8bc4bcf3bc812a87ea57dc3d98f59 2023-05-24 16:57:00,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:57:00,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:57:00,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:57:00,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:57:00,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-24 16:57:00,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/5818a16069bb475ca530d1fe57f5689d 2023-05-24 16:57:00,266 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/.tmp/info/a230da0ee463403481131b63f9f112ac 2023-05-24 16:57:00,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/.tmp/info/5818a16069bb475ca530d1fe57f5689d as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/5818a16069bb475ca530d1fe57f5689d 2023-05-24 16:57:00,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/5818a16069bb475ca530d1fe57f5689d, entries=1, sequenceid=22, filesize=5.8 K 2023-05-24 16:57:00,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for f6c8bc4bcf3bc812a87ea57dc3d98f59 in 34ms, sequenceid=22, compaction requested=true 2023-05-24 16:57:00,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/.tmp/table/77af661fce624b65be2c0759ae0a91d4 2023-05-24 16:57:00,285 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557] to archive 2023-05-24 16:57:00,286 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-24 16:57:00,287 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838 to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/cb9077a3be444fe680768567bbfc2838 2023-05-24 16:57:00,288 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866 to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/4f7b48542ef24580b3b6a67b41678866 2023-05-24 16:57:00,289 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557 to hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/info/06eb74c152e54bcf9abde5ec9d004557 2023-05-24 16:57:00,292 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/.tmp/info/a230da0ee463403481131b63f9f112ac as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/info/a230da0ee463403481131b63f9f112ac 2023-05-24 16:57:00,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/f6c8bc4bcf3bc812a87ea57dc3d98f59/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-24 16:57:00,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f6c8bc4bcf3bc812a87ea57dc3d98f59: 2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684947359728.f6c8bc4bcf3bc812a87ea57dc3d98f59. 
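The store closer above moves three compacted HFiles of the 'info' family into the archive directory as the region closes, and the preceding flush noted 'compaction requested=true'. Purely as an illustration (not the test's own code; the Admin handle is assumed to come from the mini-cluster sketch earlier), such a compaction could be requested like this:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Illustrative helper: ask the region server to major-compact the test table so
// superseded store files become eligible for archiving.
static void requestMajorCompaction(Admin admin) throws IOException {
  admin.majorCompact(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
  // The request is asynchronous; the compaction runs in the background on the region server.
}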
2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 99777e4fe481432b85b97558bdf51934, disabling compactions & flushes 2023-05-24 16:57:00,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. after waiting 0 ms 2023-05-24 16:57:00,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:57:00,300 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/info/a230da0ee463403481131b63f9f112ac, entries=20, sequenceid=14, filesize=7.6 K 2023-05-24 16:57:00,301 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/.tmp/table/77af661fce624b65be2c0759ae0a91d4 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/table/77af661fce624b65be2c0759ae0a91d4 2023-05-24 16:57:00,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/namespace/99777e4fe481432b85b97558bdf51934/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 16:57:00,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 2023-05-24 16:57:00,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 99777e4fe481432b85b97558bdf51934: 2023-05-24 16:57:00,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684947359232.99777e4fe481432b85b97558bdf51934. 
2023-05-24 16:57:00,307 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/table/77af661fce624b65be2c0759ae0a91d4, entries=4, sequenceid=14, filesize=4.9 K 2023-05-24 16:57:00,308 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 59ms, sequenceid=14, compaction requested=false 2023-05-24 16:57:00,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-24 16:57:00,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 16:57:00,314 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:57:00,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:57:00,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 16:57:00,449 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37009,1684947358686; all regions closed. 2023-05-24 16:57:00,450 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:57:00,461 DEBUG [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs 2023-05-24 16:57:00,461 INFO [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C37009%2C1684947358686.meta:.meta(num 1684947359179) 2023-05-24 16:57:00,461 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/WALs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:57:00,470 DEBUG [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/oldWALs 2023-05-24 16:57:00,470 INFO [RS:0;jenkins-hbase20:37009] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C37009%2C1684947358686:(num 1684947420231) 2023-05-24 16:57:00,470 DEBUG [RS:0;jenkins-hbase20:37009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:57:00,470 INFO [RS:0;jenkins-hbase20:37009] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:57:00,471 INFO [RS:0;jenkins-hbase20:37009] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 16:57:00,471 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
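At this point the region server has closed both FSHLog instances and archived its remaining WAL files to oldWALs. During the test run itself the same kind of roll (the 16:57:00,239 "Rolled WAL ... with entries=3" entry above) can be forced from client code; a rough sketch, under the assumption that the mini-cluster from the earlier sketch is still running:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

// Illustrative only: force a WAL roll on the single region server of a mini-cluster.
static void rollWal(HBaseTestingUtility util) throws Exception {
  Admin admin = util.getAdmin();
  ServerName rs = util.getMiniHBaseCluster().getRegionServer(0).getServerName();
  admin.rollWALWriter(rs);  // the server opens a new WAL file and can archive the old one
}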
2023-05-24 16:57:00,472 INFO [RS:0;jenkins-hbase20:37009] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37009 2023-05-24 16:57:00,475 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:57:00,475 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37009,1684947358686 2023-05-24 16:57:00,476 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:57:00,476 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,37009,1684947358686] 2023-05-24 16:57:00,476 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,37009,1684947358686; numProcessing=1 2023-05-24 16:57:00,477 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,37009,1684947358686 already deleted, retry=false 2023-05-24 16:57:00,477 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,37009,1684947358686 expired; onlineServers=0 2023-05-24 16:57:00,477 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,40141,1684947358650' ***** 2023-05-24 16:57:00,477 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 16:57:00,477 DEBUG [M:0;jenkins-hbase20:40141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47d867e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:57:00,477 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:57:00,478 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40141,1684947358650; all regions closed. 2023-05-24 16:57:00,478 DEBUG [M:0;jenkins-hbase20:40141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:57:00,478 DEBUG [M:0;jenkins-hbase20:40141] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 16:57:00,478 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-24 16:57:00,478 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947358816] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947358816,5,FailOnTimeoutGroup] 2023-05-24 16:57:00,478 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947358816] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947358816,5,FailOnTimeoutGroup] 2023-05-24 16:57:00,478 DEBUG [M:0;jenkins-hbase20:40141] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 16:57:00,479 INFO [M:0;jenkins-hbase20:40141] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 16:57:00,479 INFO [M:0;jenkins-hbase20:40141] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 16:57:00,479 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 16:57:00,479 INFO [M:0;jenkins-hbase20:40141] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 16:57:00,479 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:00,479 DEBUG [M:0;jenkins-hbase20:40141] master.HMaster(1512): Stopping service threads 2023-05-24 16:57:00,479 INFO [M:0;jenkins-hbase20:40141] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 16:57:00,479 ERROR [M:0;jenkins-hbase20:40141] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 16:57:00,480 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:57:00,480 INFO [M:0;jenkins-hbase20:40141] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 16:57:00,480 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 16:57:00,480 DEBUG [M:0;jenkins-hbase20:40141] zookeeper.ZKUtil(398): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 16:57:00,480 WARN [M:0;jenkins-hbase20:40141] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 16:57:00,480 INFO [M:0;jenkins-hbase20:40141] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 16:57:00,480 INFO [M:0;jenkins-hbase20:40141] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 16:57:00,481 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:57:00,481 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:00,481 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:00,481 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:57:00,481 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:00,481 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB 2023-05-24 16:57:00,500 INFO [M:0;jenkins-hbase20:40141] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/090bf01d446e41368de070389ad57777 2023-05-24 16:57:00,506 INFO [M:0;jenkins-hbase20:40141] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 090bf01d446e41368de070389ad57777 2023-05-24 16:57:00,507 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/090bf01d446e41368de070389ad57777 as hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/090bf01d446e41368de070389ad57777 2023-05-24 16:57:00,511 INFO [M:0;jenkins-hbase20:40141] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 090bf01d446e41368de070389ad57777 2023-05-24 16:57:00,512 INFO [M:0;jenkins-hbase20:40141] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37907/user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/090bf01d446e41368de070389ad57777, entries=11, sequenceid=100, filesize=6.1 K 2023-05-24 16:57:00,512 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize 
~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=100, compaction requested=false 2023-05-24 16:57:00,513 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:00,513 DEBUG [M:0;jenkins-hbase20:40141] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:57:00,514 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f9ed779-532e-36f1-bf21-484be81e7cbe/MasterData/WALs/jenkins-hbase20.apache.org,40141,1684947358650 2023-05-24 16:57:00,516 INFO [M:0;jenkins-hbase20:40141] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 16:57:00,516 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:57:00,517 INFO [M:0;jenkins-hbase20:40141] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40141 2023-05-24 16:57:00,518 DEBUG [M:0;jenkins-hbase20:40141] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,40141,1684947358650 already deleted, retry=false 2023-05-24 16:57:00,576 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:57:00,576 INFO [RS:0;jenkins-hbase20:37009] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37009,1684947358686; zookeeper connection closed. 2023-05-24 16:57:00,577 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): regionserver:37009-0x1017e666bd10001, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:57:00,577 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@517383f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@517383f5 2023-05-24 16:57:00,577 INFO [Listener at localhost.localdomain/37233] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 16:57:00,677 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:57:00,677 DEBUG [Listener at localhost.localdomain/37233-EventThread] zookeeper.ZKWatcher(600): master:40141-0x1017e666bd10000, quorum=127.0.0.1:50259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:57:00,677 INFO [M:0;jenkins-hbase20:40141] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40141,1684947358650; zookeeper connection closed. 
2023-05-24 16:57:00,679 WARN [Listener at localhost.localdomain/37233] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:57:00,689 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:57:00,800 WARN [BP-1140445682-148.251.75.209-1684947358196 heartbeating to localhost.localdomain/127.0.0.1:37907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:57:00,800 WARN [BP-1140445682-148.251.75.209-1684947358196 heartbeating to localhost.localdomain/127.0.0.1:37907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1140445682-148.251.75.209-1684947358196 (Datanode Uuid b9810d3b-3d7f-4e99-9d6b-49cdef791cc1) service to localhost.localdomain/127.0.0.1:37907 2023-05-24 16:57:00,802 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/dfs/data/data3/current/BP-1140445682-148.251.75.209-1684947358196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:57:00,803 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/dfs/data/data4/current/BP-1140445682-148.251.75.209-1684947358196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:57:00,805 WARN [Listener at localhost.localdomain/37233] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:57:00,810 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:57:00,919 WARN [BP-1140445682-148.251.75.209-1684947358196 heartbeating to localhost.localdomain/127.0.0.1:37907] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:57:00,919 WARN [BP-1140445682-148.251.75.209-1684947358196 heartbeating to localhost.localdomain/127.0.0.1:37907] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1140445682-148.251.75.209-1684947358196 (Datanode Uuid bc60395f-3716-4fd4-bb4a-ec24be23ad7c) service to localhost.localdomain/127.0.0.1:37907 2023-05-24 16:57:00,920 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/dfs/data/data1/current/BP-1140445682-148.251.75.209-1684947358196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:57:00,921 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/cluster_feb256ab-359a-6b5d-6962-1ed3e07b51eb/dfs/data/data2/current/BP-1140445682-148.251.75.209-1684947358196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:57:00,939 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 16:57:00,961 INFO 
[regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:57:01,051 INFO [Listener at localhost.localdomain/37233] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 16:57:01,072 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 16:57:01,080 INFO [Listener at localhost.localdomain/37233] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=94 (was 88) - Thread LEAK? -, OpenFileDescriptor=499 (was 468) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=70 (was 199), ProcessCount=169 (was 169), AvailableMemoryMB=9846 (was 10163) 2023-05-24 16:57:01,087 INFO [Listener at localhost.localdomain/37233] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=95, OpenFileDescriptor=499, MaxFileDescriptor=60000, SystemLoadAverage=70, ProcessCount=169, AvailableMemoryMB=9846 2023-05-24 16:57:01,087 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/hadoop.log.dir so I do NOT create it in target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/43b3fe25-75ca-c68a-2252-03b6e033c226/hadoop.tmp.dir so I do NOT create it in target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c, deleteOnExit=true 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/test.cache.data in system properties and HBase conf 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/hadoop.log.dir in system properties and HBase conf 
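The next test (testLogRolling) starts another mini-cluster with the StartMiniClusterOption printed above (1 master, 1 region server, 2 data nodes). As a sketch of how such an option object is typically built and passed to HBaseTestingUtility — illustrative, not the test's exact code:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)  // matches StartMiniClusterOption{... numDataNodes=2 ...} in the log
        .build();
    util.startMiniCluster(option);
    // ... test logic would go here ...
    util.shutdownMiniCluster();
  }
}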
2023-05-24 16:57:01,088 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:57:01,089 DEBUG [Listener at localhost.localdomain/37233] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:57:01,089 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO 
[Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:57:01,090 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:57:01,091 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:57:01,091 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:57:01,091 INFO [Listener at localhost.localdomain/37233] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:57:01,092 WARN [Listener at localhost.localdomain/37233] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-24 16:57:01,094 WARN [Listener at localhost.localdomain/37233] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:57:01,094 WARN [Listener at localhost.localdomain/37233] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:57:01,119 WARN [Listener at localhost.localdomain/37233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:57:01,121 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:57:01,126 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/java.io.tmpdir/Jetty_localhost_localdomain_45473_hdfs____.wenkmv/webapp 2023-05-24 16:57:01,196 INFO [Listener at localhost.localdomain/37233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:45473 2023-05-24 16:57:01,197 WARN [Listener at localhost.localdomain/37233] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:57:01,198 WARN [Listener at localhost.localdomain/37233] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:57:01,198 WARN [Listener at localhost.localdomain/37233] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:57:01,221 WARN [Listener at localhost.localdomain/42999] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:57:01,231 WARN [Listener at localhost.localdomain/42999] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:57:01,234 WARN [Listener at localhost.localdomain/42999] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:57:01,236 INFO [Listener at localhost.localdomain/42999] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:57:01,243 INFO [Listener at localhost.localdomain/42999] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/java.io.tmpdir/Jetty_localhost_38715_datanode____.5x0077/webapp 2023-05-24 16:57:01,314 INFO [Listener at localhost.localdomain/42999] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38715 2023-05-24 16:57:01,319 WARN [Listener at localhost.localdomain/35111] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:57:01,335 WARN [Listener at localhost.localdomain/35111] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:57:01,339 WARN [Listener at localhost.localdomain/35111] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:57:01,341 INFO [Listener at localhost.localdomain/35111] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:57:01,345 INFO [Listener at localhost.localdomain/35111] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/java.io.tmpdir/Jetty_localhost_45787_datanode____.qfqqtg/webapp 2023-05-24 16:57:01,410 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b82e1194375b836: Processing first storage report for DS-4640d682-4aaf-42a1-971f-44c48e99f33c from datanode 23a7f3e4-6f92-4be5-b789-80176b3c09bb 2023-05-24 16:57:01,410 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b82e1194375b836: from storage DS-4640d682-4aaf-42a1-971f-44c48e99f33c node DatanodeRegistration(127.0.0.1:45905, datanodeUuid=23a7f3e4-6f92-4be5-b789-80176b3c09bb, infoPort=42993, infoSecurePort=0, ipcPort=35111, storageInfo=lv=-57;cid=testClusterID;nsid=1772185129;c=1684947421095), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:57:01,411 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b82e1194375b836: Processing first storage report for DS-a7c41bca-9138-4969-94fd-402d6415bd4e from datanode 23a7f3e4-6f92-4be5-b789-80176b3c09bb 2023-05-24 16:57:01,411 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b82e1194375b836: from storage DS-a7c41bca-9138-4969-94fd-402d6415bd4e node DatanodeRegistration(127.0.0.1:45905, datanodeUuid=23a7f3e4-6f92-4be5-b789-80176b3c09bb, infoPort=42993, infoSecurePort=0, ipcPort=35111, storageInfo=lv=-57;cid=testClusterID;nsid=1772185129;c=1684947421095), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:57:01,425 INFO [Listener at localhost.localdomain/35111] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45787 2023-05-24 16:57:01,433 WARN [Listener at localhost.localdomain/40087] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:57:01,513 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2035b139cecc455e: Processing first storage report for DS-173d2ba4-c529-4713-b892-0f68ecd170c3 from datanode 308083e5-b77d-4d1a-8e0f-33cf352c941b 2023-05-24 16:57:01,513 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2035b139cecc455e: from storage DS-173d2ba4-c529-4713-b892-0f68ecd170c3 node DatanodeRegistration(127.0.0.1:37073, datanodeUuid=308083e5-b77d-4d1a-8e0f-33cf352c941b, infoPort=39987, infoSecurePort=0, ipcPort=40087, storageInfo=lv=-57;cid=testClusterID;nsid=1772185129;c=1684947421095), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:57:01,513 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2035b139cecc455e: Processing first storage report for DS-eff648ee-528b-41fa-aaa1-bc7ae687b75e from datanode 308083e5-b77d-4d1a-8e0f-33cf352c941b 2023-05-24 16:57:01,513 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2035b139cecc455e: from storage DS-eff648ee-528b-41fa-aaa1-bc7ae687b75e node DatanodeRegistration(127.0.0.1:37073, 
datanodeUuid=308083e5-b77d-4d1a-8e0f-33cf352c941b, infoPort=39987, infoSecurePort=0, ipcPort=40087, storageInfo=lv=-57;cid=testClusterID;nsid=1772185129;c=1684947421095), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:57:01,540 DEBUG [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30 2023-05-24 16:57:01,544 INFO [Listener at localhost.localdomain/40087] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/zookeeper_0, clientPort=63859, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:57:01,545 INFO [Listener at localhost.localdomain/40087] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63859 2023-05-24 16:57:01,546 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,547 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,565 INFO [Listener at localhost.localdomain/40087] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6 with version=8 2023-05-24 16:57:01,565 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,567 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:57:01,568 INFO [Listener at localhost.localdomain/40087] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:57:01,569 INFO [Listener at localhost.localdomain/40087] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33229 2023-05-24 16:57:01,569 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,570 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,571 INFO [Listener at localhost.localdomain/40087] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33229 connecting to ZooKeeper ensemble=127.0.0.1:63859 2023-05-24 16:57:01,575 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:332290x0, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:57:01,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33229-0x1017e6761930000 connected 2023-05-24 16:57:01,589 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:57:01,590 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:57:01,590 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:57:01,590 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33229 2023-05-24 16:57:01,591 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33229 2023-05-24 16:57:01,591 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33229 2023-05-24 16:57:01,591 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33229 2023-05-24 16:57:01,591 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33229 2023-05-24 16:57:01,591 INFO [Listener at localhost.localdomain/40087] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6, hbase.cluster.distributed=false 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:57:01,603 INFO [Listener at localhost.localdomain/40087] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:57:01,605 INFO [Listener at localhost.localdomain/40087] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43397 2023-05-24 16:57:01,605 INFO [Listener at localhost.localdomain/40087] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:57:01,606 DEBUG [Listener at localhost.localdomain/40087] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:57:01,606 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,607 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,608 INFO [Listener at localhost.localdomain/40087] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43397 connecting to ZooKeeper ensemble=127.0.0.1:63859 2023-05-24 16:57:01,610 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:433970x0, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:57:01,611 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): regionserver:433970x0, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:57:01,612 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43397-0x1017e6761930001 connected 2023-05-24 16:57:01,612 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): regionserver:43397-0x1017e6761930001, 
quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:57:01,613 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ZKUtil(164): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:57:01,613 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43397 2023-05-24 16:57:01,613 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43397 2023-05-24 16:57:01,613 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43397 2023-05-24 16:57:01,614 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43397 2023-05-24 16:57:01,614 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43397 2023-05-24 16:57:01,615 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,626 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:57:01,627 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,642 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:57:01,642 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:57:01,642 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:57:01,645 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33229,1684947421566 from backup master directory 2023-05-24 16:57:01,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:57:01,650 DEBUG [Listener at localhost.localdomain/40087-EventThread] 
zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,650 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:57:01,650 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:57:01,650 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,670 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/hbase.id with ID: f38c2d64-9a00-414d-b109-6e5bd59aed59 2023-05-24 16:57:01,684 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:01,686 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,692 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x502c25f4 to 127.0.0.1:63859 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:57:01,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3775450b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:57:01,702 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:57:01,702 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:57:01,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:57:01,705 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store-tmp 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:57:01,718 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:01,718 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:57:01,718 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:57:01,719 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/WALs/jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33229%2C1684947421566, suffix=, logDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/WALs/jenkins-hbase20.apache.org,33229,1684947421566, archiveDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/oldWALs, maxLogs=10 2023-05-24 16:57:01,732 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/WALs/jenkins-hbase20.apache.org,33229,1684947421566/jenkins-hbase20.apache.org%2C33229%2C1684947421566.1684947421723 2023-05-24 16:57:01,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45905,DS-4640d682-4aaf-42a1-971f-44c48e99f33c,DISK], DatanodeInfoWithStorage[127.0.0.1:37073,DS-173d2ba4-c529-4713-b892-0f68ecd170c3,DISK]] 2023-05-24 16:57:01,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:57:01,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:01,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,732 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,735 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,736 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:57:01,736 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:57:01,736 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:01,737 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,737 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,739 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:57:01,743 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:57:01,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=736214, jitterRate=-0.06385649740695953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:57:01,744 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:57:01,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:57:01,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:57:01,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:57:01,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 16:57:01,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 16:57:01,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 16:57:01,746 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:57:01,748 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:57:01,749 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:57:01,757 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:57:01,757 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
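The repeated "WAL configuration: blocksize=256 MB, rollsize=128 MB, ... maxLogs=10" entries above come from the FSHLog provider sizing its roll threshold off the configured WAL block size. A minimal sketch of that arithmetic, assuming the standard hbase.regionserver.hlog.blocksize and hbase.regionserver.logroll.multiplier properties; the default values shown are assumptions for illustration, not read from this build:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSizeSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Assume the 256 MB block size the log reports; stock HBase derives it
            // from the filesystem block size when the property is unset.
            long blockSize = conf.getLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
            // The WAL rolls once it reaches blockSize * multiplier (0.5 assumed here).
            float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
            long rollSize = (long) (blockSize * multiplier);
            System.out.println("rollsize=" + rollSize); // 134217728 bytes = 128 MB, matching the log
        }
    }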
2023-05-24 16:57:01,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:57:01,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:57:01,758 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:57:01,760 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,760 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:57:01,761 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:57:01,761 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:57:01,762 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:57:01,762 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:57:01,762 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,762 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33229,1684947421566, sessionid=0x1017e6761930000, setting cluster-up flag (Was=false) 2023-05-24 16:57:01,765 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,767 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:57:01,767 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,768 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:01,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:57:01,771 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:01,772 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.hbase-snapshot/.tmp 2023-05-24 16:57:01,776 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:57:01,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947451778 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:57:01,779 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:57:01,782 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,782 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:57:01,782 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:57:01,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:57:01,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:57:01,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:57:01,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:57:01,783 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:57:01,783 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947421783,5,FailOnTimeoutGroup] 2023-05-24 16:57:01,784 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947421783,5,FailOnTimeoutGroup] 2023-05-24 16:57:01,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:57:01,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
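The cleaner and housekeeping entries above ("Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled", HFileCleaner, ReplicationBarrierCleaner, SnapshotCleaner) all follow the same ScheduledChore/ChoreService pattern. A hedged sketch of a custom chore scheduled the same way; the chore name, period, and thread-pool prefix are made up for illustration:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
        public static void main(String[] args) {
            Stoppable stopper = new Stoppable() {
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            // Period is interpreted in the chore's time unit (milliseconds, as the log shows).
            ScheduledChore demo = new ScheduledChore("DemoCleaner", stopper, 600000) {
                @Override protected void chore() {
                    // Periodic work goes here, e.g. scanning a directory for expired files.
                }
            };
            new ChoreService("demo").scheduleChore(demo);
        }
    }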
2023-05-24 16:57:01,784 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:57:01,797 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:57:01,798 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:57:01,798 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6 2023-05-24 16:57:01,856 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(951): ClusterId : f38c2d64-9a00-414d-b109-6e5bd59aed59 2023-05-24 16:57:01,857 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:57:01,860 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:57:01,860 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:57:01,862 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot 
initialized 2023-05-24 16:57:01,863 DEBUG [RS:0;jenkins-hbase20:43397] zookeeper.ReadOnlyZKClient(139): Connect 0x225a82a0 to 127.0.0.1:63859 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:57:01,868 DEBUG [RS:0;jenkins-hbase20:43397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c9d689b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:57:01,869 DEBUG [RS:0;jenkins-hbase20:43397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28a2e9d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:57:01,870 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:01,871 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:57:01,873 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info 2023-05-24 16:57:01,873 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:57:01,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:01,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:57:01,875 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:57:01,875 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:57:01,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:01,876 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:57:01,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/table 2023-05-24 16:57:01,877 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:57:01,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:01,878 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43397 2023-05-24 16:57:01,878 INFO [RS:0;jenkins-hbase20:43397] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:57:01,878 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740 2023-05-24 16:57:01,878 INFO [RS:0;jenkins-hbase20:43397] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:57:01,879 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1022): About to register with Master. 
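The hbase:meta descriptor printed above ({NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192', ...}) is the string form of a TableDescriptor. A rough sketch of building an equivalent column-family definition with the HBase 2.x builder API, using a placeholder table name (hbase:meta itself is created internally by the master during bootstrap):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
        public static void main(String[] args) {
            // "exampleTable" is a placeholder, not a table from this test run.
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("exampleTable"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.NONE)   // BLOOMFILTER => 'NONE'
                    .setInMemory(true)                    // IN_MEMORY => 'true'
                    .setMaxVersions(3)                    // VERSIONS => '3'
                    .setBlocksize(8192)                   // BLOCKSIZE => '8192'
                    .build())
                .build();
            System.out.println(td);
        }
    }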
2023-05-24 16:57:01,879 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740 2023-05-24 16:57:01,879 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,33229,1684947421566 with isa=jenkins-hbase20.apache.org/148.251.75.209:43397, startcode=1684947421602 2023-05-24 16:57:01,879 DEBUG [RS:0;jenkins-hbase20:43397] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:57:01,882 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:57:01,883 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:40343, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:57:01,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,884 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6 2023-05-24 16:57:01,884 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:57:01,884 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:42999 2023-05-24 16:57:01,884 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:57:01,886 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:57:01,886 DEBUG [RS:0;jenkins-hbase20:43397] zookeeper.ZKUtil(162): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,886 WARN [RS:0;jenkins-hbase20:43397] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
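The steady stream of "Set watcher on znode that does not yet exist, /hbase/master" and "Received ZooKeeper Event, type=NodeCreated ... path=/hbase/master" lines is ordinary ZooKeeper watch traffic against the mini ensemble on 127.0.0.1:63859. A bare-bones sketch of the same watch-then-react pattern with the plain ZooKeeper client; the session timeout and the printed output are illustrative only:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
        public static void main(String[] args) throws Exception {
            Watcher watcher = new Watcher() {
                @Override public void process(WatchedEvent event) {
                    // Fires once per registered watch, e.g. NodeCreated on /hbase/master.
                    System.out.println("type=" + event.getType() + ", path=" + event.getPath());
                }
            };
            ZooKeeper zk = new ZooKeeper("127.0.0.1:63859", 30000, watcher);
            // exists() with watch=true registers a one-shot watch even when the znode
            // is not there yet, which is what ZKUtil's "Set watcher on znode that
            // does not yet exist" messages correspond to.
            zk.exists("/hbase/master", true);
        }
    }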
2023-05-24 16:57:01,886 INFO [RS:0;jenkins-hbase20:43397] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:57:01,887 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,887 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43397,1684947421602] 2023-05-24 16:57:01,888 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:57:01,888 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=758733, jitterRate=-0.0352218896150589}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:57:01,888 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:57:01,889 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:57:01,889 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:57:01,889 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:57:01,889 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:57:01,889 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:57:01,889 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:57:01,889 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:57:01,890 DEBUG [RS:0;jenkins-hbase20:43397] zookeeper.ZKUtil(162): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,890 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:57:01,890 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:57:01,890 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:57:01,891 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:57:01,891 INFO [RS:0;jenkins-hbase20:43397] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:57:01,892 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:57:01,893 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, 
ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:57:01,893 INFO [RS:0;jenkins-hbase20:43397] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:57:01,894 INFO [RS:0;jenkins-hbase20:43397] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:57:01,894 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,894 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:57:01,895 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,896 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,897 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,897 DEBUG [RS:0;jenkins-hbase20:43397] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:57:01,898 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,899 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
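The "globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M" line above is derived from the region server heap and the global memstore fractions. A sketch of that calculation, assuming the usual hbase.regionserver.global.memstore.size and hbase.regionserver.global.memstore.size.lower.limit properties; the 0.4 and 0.95 defaults are assumptions for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreLimitSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            long maxHeap = Runtime.getRuntime().maxMemory();
            // Fraction of the heap all memstores may use before flushes are forced.
            float global = conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f);
            // Low-water mark as a fraction of the global limit.
            float lower = conf.getFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
            long limit = (long) (maxHeap * global);
            long lowMark = (long) (limit * lower);
            // With a heap of roughly 1.9 GB this yields about the 782.4 M / 743.3 M pair in the log.
            System.out.println("globalMemStoreLimit=" + limit + ", lowMark=" + lowMark);
        }
    }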
2023-05-24 16:57:01,899 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,909 INFO [RS:0;jenkins-hbase20:43397] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:57:01,909 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43397,1684947421602-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:01,921 INFO [RS:0;jenkins-hbase20:43397] regionserver.Replication(203): jenkins-hbase20.apache.org,43397,1684947421602 started 2023-05-24 16:57:01,921 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43397,1684947421602, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43397, sessionid=0x1017e6761930001 2023-05-24 16:57:01,921 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:57:01,921 DEBUG [RS:0;jenkins-hbase20:43397] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,921 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43397,1684947421602' 2023-05-24 16:57:01,921 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:57:01,921 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43397,1684947421602' 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:57:01,922 DEBUG [RS:0;jenkins-hbase20:43397] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:57:01,922 INFO [RS:0;jenkins-hbase20:43397] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:57:01,922 INFO [RS:0;jenkins-hbase20:43397] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
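The flush-table-proc and online-snapshot members started above are the region server side of ZooKeeper-coordinated procedures; the znodes under /hbase/flush-table-proc and /hbase/online-snapshot are managed internally and are never touched by client code. A minimal, hedged sketch of how a client would request a table flush against this cluster (whether the request rides this ZK procedure or a direct region server RPC depends on the HBase version and is not visible in this log):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Ask for a flush of the table this test creates later in the log.
                admin.flush(TableName.valueOf("TestLogRolling-testLogRolling"));
            }
        }
    }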
2023-05-24 16:57:02,025 INFO [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43397%2C1684947421602, suffix=, logDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602, archiveDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs, maxLogs=32 2023-05-24 16:57:02,039 INFO [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947422026 2023-05-24 16:57:02,039 DEBUG [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37073,DS-173d2ba4-c529-4713-b892-0f68ecd170c3,DISK], DatanodeInfoWithStorage[127.0.0.1:45905,DS-4640d682-4aaf-42a1-971f-44c48e99f33c,DISK]] 2023-05-24 16:57:02,043 DEBUG [jenkins-hbase20:33229] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:57:02,044 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43397,1684947421602, state=OPENING 2023-05-24 16:57:02,045 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:57:02,045 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:02,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:02,046 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:57:02,200 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:02,200 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:57:02,208 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59612, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:57:02,212 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:57:02,212 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:57:02,214 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43397%2C1684947421602.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602, archiveDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs, maxLogs=32 2023-05-24 16:57:02,222 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.meta.1684947422214.meta 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45905,DS-4640d682-4aaf-42a1-971f-44c48e99f33c,DISK], DatanodeInfoWithStorage[127.0.0.1:37073,DS-173d2ba4-c529-4713-b892-0f68ecd170c3,DISK]] 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:57:02,223 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:57:02,223 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:57:02,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:57:02,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info 2023-05-24 16:57:02,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info 2023-05-24 16:57:02,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:57:02,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:02,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:57:02,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:57:02,227 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:57:02,227 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:57:02,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:02,228 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:57:02,228 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/table 2023-05-24 16:57:02,228 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/table 2023-05-24 16:57:02,229 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:57:02,229 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:02,230 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740 2023-05-24 16:57:02,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740 2023-05-24 16:57:02,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:57:02,234 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:57:02,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=861206, jitterRate=0.09508055448532104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:57:02,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:57:02,236 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947422199 2023-05-24 16:57:02,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:57:02,240 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:57:02,241 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43397,1684947421602, state=OPEN 2023-05-24 16:57:02,242 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:57:02,242 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:57:02,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:57:02,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43397,1684947421602 in 196 msec 2023-05-24 
16:57:02,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:57:02,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 355 msec 2023-05-24 16:57:02,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 474 msec 2023-05-24 16:57:02,249 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947422249, completionTime=-1 2023-05-24 16:57:02,249 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:57:02,250 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 16:57:02,252 DEBUG [hconnection-0x1b7fa08f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:57:02,254 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:57:02,256 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:57:02,256 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947482256 2023-05-24 16:57:02,256 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947542256 2023-05-24 16:57:02,256 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33229,1684947421566-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33229,1684947421566-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33229,1684947421566-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33229, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
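A few entries back the region server's WAL is created with blocksize=256 MB and rollsize=128 MB, so the roll threshold is exactly half the WAL block size. A tiny illustrative check (the 0.5 multiplier and the property name hbase.regionserver.logroll.multiplier are assumptions about common defaults, not values printed by this log):

    // Sketch: relation between the WAL block size and the roll threshold seen above.
    public class WalRollSizeSketch {
        public static void main(String[] args) {
            long blocksizeBytes = 256L * 1024 * 1024;   // blocksize=256 MB from the log
            double rollMultiplier = 0.5;                // assumed hbase.regionserver.logroll.multiplier default
            long rollsizeBytes = (long) (blocksizeBytes * rollMultiplier);
            System.out.println("rollsize=" + (rollsizeBytes >> 20) + " MB"); // 128 MB, as logged
        }
    }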
2023-05-24 16:57:02,262 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:57:02,263 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:57:02,264 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:57:02,266 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:57:02,267 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:57:02,269 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,270 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d empty. 2023-05-24 16:57:02,270 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,270 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:57:02,281 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:57:02,281 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8a4d043522699af70c775e2ba14b314d, NAME => 'hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8a4d043522699af70c775e2ba14b314d, disabling compactions & flushes 2023-05-24 16:57:02,288 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. after waiting 0 ms 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,288 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,288 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8a4d043522699af70c775e2ba14b314d: 2023-05-24 16:57:02,290 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:57:02,291 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947422291"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947422291"}]},"ts":"1684947422291"} 2023-05-24 16:57:02,293 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:57:02,294 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:57:02,294 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947422294"}]},"ts":"1684947422294"} 2023-05-24 16:57:02,296 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:57:02,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8a4d043522699af70c775e2ba14b314d, ASSIGN}] 2023-05-24 16:57:02,302 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8a4d043522699af70c775e2ba14b314d, ASSIGN 2023-05-24 16:57:02,303 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8a4d043522699af70c775e2ba14b314d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43397,1684947421602; forceNewPlan=false, retain=false 2023-05-24 16:57:02,454 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8a4d043522699af70c775e2ba14b314d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:02,454 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947422454"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947422454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947422454"}]},"ts":"1684947422454"} 2023-05-24 16:57:02,455 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8a4d043522699af70c775e2ba14b314d, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:02,618 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8a4d043522699af70c775e2ba14b314d, NAME => 'hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:57:02,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:02,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,622 INFO [StoreOpener-8a4d043522699af70c775e2ba14b314d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,624 DEBUG [StoreOpener-8a4d043522699af70c775e2ba14b314d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/info 2023-05-24 16:57:02,624 DEBUG [StoreOpener-8a4d043522699af70c775e2ba14b314d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/info 2023-05-24 16:57:02,624 INFO [StoreOpener-8a4d043522699af70c775e2ba14b314d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8a4d043522699af70c775e2ba14b314d columnFamilyName info 2023-05-24 16:57:02,625 INFO [StoreOpener-8a4d043522699af70c775e2ba14b314d-1] regionserver.HStore(310): Store=8a4d043522699af70c775e2ba14b314d/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:02,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,628 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:57:02,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:57:02,630 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8a4d043522699af70c775e2ba14b314d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=773816, jitterRate=-0.016042664647102356}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:57:02,630 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8a4d043522699af70c775e2ba14b314d: 2023-05-24 16:57:02,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d., pid=6, masterSystemTime=1684947422608 2023-05-24 16:57:02,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:57:02,634 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 
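Each "Opened ...; next sequenceid=2" entry reports a desiredMaxFileSize alongside a jitterRate, and the numbers are consistent with the configured region max file size for this test (786432 bytes, quoted in the MAX_FILESIZE warning further down) scaled by (1 + jitterRate): 786432 × (1 − 0.01604…) ≈ 773816 for the namespace region here, 786432 × (1 + 0.09508…) ≈ 861206 for the meta region earlier. A small check of that relationship (the exact rounding inside the split policy is not shown by this log):

    // Sketch: reproduce the jittered split thresholds reported in this log.
    public class SplitJitterSketch {
        public static void main(String[] args) {
            long configuredMaxFileSize = 786432L; // region max file size used by this test
            double[] jitterRates = {
                -0.0352218896150589,    // meta region, first open
                 0.09508055448532104,   // meta region, reopen
                -0.016042664647102356,  // hbase:namespace region
                 0.10922414064407349    // TestLogRolling-testLogRolling region
            };
            for (double jitter : jitterRates) {
                long desired = Math.round(configuredMaxFileSize * (1.0 + jitter));
                // Matches 758733, 861206, 773816 and 872329 from the log, up to rounding.
                System.out.println("jitterRate=" + jitter + " -> desiredMaxFileSize~" + desired);
            }
        }
    }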
2023-05-24 16:57:02,634 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8a4d043522699af70c775e2ba14b314d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:02,635 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947422634"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947422634"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947422634"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947422634"}]},"ts":"1684947422634"} 2023-05-24 16:57:02,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:57:02,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8a4d043522699af70c775e2ba14b314d, server=jenkins-hbase20.apache.org,43397,1684947421602 in 181 msec 2023-05-24 16:57:02,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:57:02,641 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8a4d043522699af70c775e2ba14b314d, ASSIGN in 339 msec 2023-05-24 16:57:02,641 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:57:02,641 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947422641"}]},"ts":"1684947422641"} 2023-05-24 16:57:02,643 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:57:02,646 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:57:02,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 384 msec 2023-05-24 16:57:02,665 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:57:02,666 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:57:02,666 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:02,670 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:57:02,683 DEBUG [Listener at localhost.localdomain/40087-EventThread] 
zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:57:02,688 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 msec 2023-05-24 16:57:02,692 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:57:02,699 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:57:02,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-24 16:57:02,705 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:57:02,707 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.056sec 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33229,1684947421566-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:57:02,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33229,1684947421566-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
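Above, CreateNamespaceProcedure runs for the two built-in namespaces (default and hbase) and the master then reports its initialization complete. Purely as an illustration of the client-visible result, the namespaces whose znodes appear under /hbase/namespace can be listed with the standard Admin API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListNamespacesSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Right after master initialization this prints "default" and "hbase".
                for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                    System.out.println(ns.getName());
                }
            }
        }
    }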
2023-05-24 16:57:02,708 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:57:02,751 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ReadOnlyZKClient(139): Connect 0x1191a066 to 127.0.0.1:63859 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:57:02,757 DEBUG [Listener at localhost.localdomain/40087] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f7ef667, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:57:02,759 DEBUG [hconnection-0x3df1f624-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:57:02,761 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:57:02,763 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:57:02,763 INFO [Listener at localhost.localdomain/40087] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:57:02,769 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:57:02,769 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:57:02,771 INFO [Listener at localhost.localdomain/40087] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:57:02,774 DEBUG [Listener at localhost.localdomain/40087] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 16:57:02,779 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:37872, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 16:57:02,781 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 16:57:02,782 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 16:57:02,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:57:02,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-24 16:57:02,789 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:57:02,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-24 16:57:02,791 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:57:02,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:57:02,793 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:02,793 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70 empty. 
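The create request above, together with the balanceSwitch=false call and the MAX_FILESIZE/MEMSTORE_FLUSHSIZE warnings just before it, corresponds to the test client disabling the balancer and creating a table with deliberately tiny split and flush thresholds. The test's own code is not part of this log, but with the standard 2.x client API the equivalent calls look roughly like this (whether 786432 and 8192 were set on the descriptor or in the Configuration cannot be told from the warnings; the sketch sets them on the descriptor):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestTableSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                admin.balancerSwitch(false, true); // "set balanceSwitch=false" in the log

                TableDescriptor table = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
                    .setMaxFileSize(786432L)       // triggers the MAX_FILESIZE warning
                    .setMemStoreFlushSize(8192L)   // triggers the MEMSTORE_FLUSHSIZE warning
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                        .newBuilder(Bytes.toBytes("info"))
                        .setMaxVersions(1)         // VERSIONS => '1' in the descriptor above
                        .build())
                    .build();
                admin.createTable(table);
            }
        }
    }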
2023-05-24 16:57:02,794 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:02,794 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-24 16:57:02,806 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 16:57:02,807 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => d34c9fdcfa19fb58cb6981fec1d08c70, NAME => 'TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/.tmp 2023-05-24 16:57:02,813 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:02,813 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing d34c9fdcfa19fb58cb6981fec1d08c70, disabling compactions & flushes 2023-05-24 16:57:02,813 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:02,813 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:02,813 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. after waiting 0 ms 2023-05-24 16:57:02,813 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:02,814 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 
2023-05-24 16:57:02,814 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:02,816 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:57:02,817 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947422816"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947422816"}]},"ts":"1684947422816"} 2023-05-24 16:57:02,818 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:57:02,819 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:57:02,819 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947422819"}]},"ts":"1684947422819"} 2023-05-24 16:57:02,820 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-24 16:57:02,822 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, ASSIGN}] 2023-05-24 16:57:02,824 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, ASSIGN 2023-05-24 16:57:02,824 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43397,1684947421602; forceNewPlan=false, retain=false 2023-05-24 16:57:02,976 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d34c9fdcfa19fb58cb6981fec1d08c70, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:02,976 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947422976"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947422976"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947422976"}]},"ts":"1684947422976"} 2023-05-24 16:57:02,980 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:03,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:03,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d34c9fdcfa19fb58cb6981fec1d08c70, NAME => 'TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:57:03,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:03,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,146 INFO [StoreOpener-d34c9fdcfa19fb58cb6981fec1d08c70-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,149 DEBUG [StoreOpener-d34c9fdcfa19fb58cb6981fec1d08c70-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info 2023-05-24 16:57:03,149 DEBUG [StoreOpener-d34c9fdcfa19fb58cb6981fec1d08c70-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info 2023-05-24 16:57:03,150 INFO [StoreOpener-d34c9fdcfa19fb58cb6981fec1d08c70-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d34c9fdcfa19fb58cb6981fec1d08c70 columnFamilyName info 2023-05-24 16:57:03,151 INFO [StoreOpener-d34c9fdcfa19fb58cb6981fec1d08c70-1] regionserver.HStore(310): Store=d34c9fdcfa19fb58cb6981fec1d08c70/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:03,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:03,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:57:03,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d34c9fdcfa19fb58cb6981fec1d08c70; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872329, jitterRate=0.10922414064407349}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:57:03,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:03,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70., pid=11, masterSystemTime=1684947423135 2023-05-24 16:57:03,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:03,167 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 
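The region for TestLogRolling-testLogRolling is now open, and the flush entries a little further down (dataSize=7.36 KB, then 17.86 KB, against the 8192-byte memstore flush size) come from the test writing rows into it. A hedged sketch of what such writes look like from the client side (row keys, the qualifier and the value sizes here are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WriteRowsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
                byte[] family = Bytes.toBytes("info");
                for (int i = 0; i < 32; i++) {
                    // ~1 KB per cell: a handful of puts crosses the 8192-byte flush
                    // threshold, producing flushes like the ones logged below.
                    Put put = new Put(Bytes.toBytes(String.format("row-%04d", i)));
                    put.addColumn(family, Bytes.toBytes("q"), new byte[1024]);
                    table.put(put);
                }
            }
        }
    }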
2023-05-24 16:57:03,168 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d34c9fdcfa19fb58cb6981fec1d08c70, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:03,168 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947423167"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947423167"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947423167"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947423167"}]},"ts":"1684947423167"} 2023-05-24 16:57:03,173 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 16:57:03,173 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602 in 190 msec 2023-05-24 16:57:03,176 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 16:57:03,176 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, ASSIGN in 351 msec 2023-05-24 16:57:03,177 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:57:03,177 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947423177"}]},"ts":"1684947423177"} 2023-05-24 16:57:03,179 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-24 16:57:03,181 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:57:03,183 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 399 msec 2023-05-24 16:57:05,851 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:57:07,891 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 16:57:07,892 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 16:57:07,893 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-24 16:57:12,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33229] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 16:57:12,793 INFO [Listener at localhost.localdomain/40087] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:TestLogRolling-testLogRolling, procId: 9 completed 2023-05-24 16:57:12,796 DEBUG [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-24 16:57:12,796 DEBUG [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:12,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:12,812 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:57:12,822 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/074d58cdeb404768af46f6ccbb2defdd 2023-05-24 16:57:12,830 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/074d58cdeb404768af46f6ccbb2defdd as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd 2023-05-24 16:57:12,835 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd, entries=7, sequenceid=11, filesize=12.1 K 2023-05-24 16:57:12,836 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for d34c9fdcfa19fb58cb6981fec1d08c70 in 24ms, sequenceid=11, compaction requested=false 2023-05-24 16:57:12,837 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:12,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:12,837 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-24 16:57:12,849 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/2863323f39d345919a79991a34ca9a3c 2023-05-24 16:57:12,856 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/2863323f39d345919a79991a34ca9a3c as 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c 2023-05-24 16:57:12,861 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c, entries=17, sequenceid=31, filesize=22.6 K 2023-05-24 16:57:12,862 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=8.41 KB/8608 for d34c9fdcfa19fb58cb6981fec1d08c70 in 25ms, sequenceid=31, compaction requested=false 2023-05-24 16:57:12,862 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:12,862 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=34.8 K, sizeToCheck=16.0 K 2023-05-24 16:57:12,862 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:12,863 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c because midkey is the same as first or last row 2023-05-24 16:57:14,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:14,852 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-24 16:57:14,867 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/41c4a157e0534a6c8896b56699db1a5c 2023-05-24 16:57:14,875 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/41c4a157e0534a6c8896b56699db1a5c as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c 2023-05-24 16:57:14,883 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c, entries=9, sequenceid=43, filesize=14.2 K 2023-05-24 16:57:14,884 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=18.91 KB/19368 for d34c9fdcfa19fb58cb6981fec1d08c70 in 32ms, sequenceid=43, compaction requested=true 2023-05-24 16:57:14,885 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 
2023-05-24 16:57:14,885 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-05-24 16:57:14,885 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:14,885 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c because midkey is the same as first or last row 2023-05-24 16:57:14,885 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:14,885 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:14,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:14,886 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-24 16:57:14,887 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:14,888 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d34c9fdcfa19fb58cb6981fec1d08c70/info is initiating minor compaction (all files) 2023-05-24 16:57:14,888 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d34c9fdcfa19fb58cb6981fec1d08c70/info in TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 
2023-05-24 16:57:14,888 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp, totalSize=49.0 K 2023-05-24 16:57:14,889 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 074d58cdeb404768af46f6ccbb2defdd, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684947432800 2023-05-24 16:57:14,890 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 2863323f39d345919a79991a34ca9a3c, keycount=17, bloomtype=ROW, size=22.6 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1684947432813 2023-05-24 16:57:14,890 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 41c4a157e0534a6c8896b56699db1a5c, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1684947432838 2023-05-24 16:57:14,907 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=65 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/86fe6ed31e384fc09b08c854cef9fe08 2023-05-24 16:57:14,914 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-24 16:57:14,915 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/86fe6ed31e384fc09b08c854cef9fe08 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08 2023-05-24 16:57:14,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 71 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947444914, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:14,915 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d34c9fdcfa19fb58cb6981fec1d08c70#info#compaction#31 average throughput is 16.93 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:14,922 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08, entries=19, sequenceid=65, filesize=24.7 K 2023-05-24 16:57:14,923 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for d34c9fdcfa19fb58cb6981fec1d08c70 in 37ms, sequenceid=65, compaction requested=false 2023-05-24 16:57:14,923 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:14,923 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=73.7 K, sizeToCheck=16.0 K 2023-05-24 16:57:14,923 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:14,923 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08 because midkey is the same as first or last row 2023-05-24 16:57:14,939 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/3f9ec9823d3349348f6f348f0fe6616b as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b 2023-05-24 16:57:14,947 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d34c9fdcfa19fb58cb6981fec1d08c70/info of d34c9fdcfa19fb58cb6981fec1d08c70 into 3f9ec9823d3349348f6f348f0fe6616b(size=39.6 K), total size for store is 64.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 16:57:14,947 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:14,947 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70., storeName=d34c9fdcfa19fb58cb6981fec1d08c70/info, priority=13, startTime=1684947434885; duration=0sec 2023-05-24 16:57:14,951 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=64.4 K, sizeToCheck=16.0 K 2023-05-24 16:57:14,951 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:14,951 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b because midkey is the same as first or last row 2023-05-24 16:57:14,951 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,023 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-24 16:57:25,040 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=80 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/0c50f6612fb64c50be64f2e0b3510c61 2023-05-24 16:57:25,047 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/0c50f6612fb64c50be64f2e0b3510c61 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61 2023-05-24 16:57:25,052 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61, entries=11, sequenceid=80, filesize=16.3 K 2023-05-24 16:57:25,053 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for d34c9fdcfa19fb58cb6981fec1d08c70 in 30ms, sequenceid=80, compaction requested=true 2023-05-24 16:57:25,053 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:25,053 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, 
sizeToCheck=16.0 K 2023-05-24 16:57:25,053 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:25,053 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b because midkey is the same as first or last row 2023-05-24 16:57:25,053 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,053 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:25,054 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82610 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:25,054 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d34c9fdcfa19fb58cb6981fec1d08c70/info is initiating minor compaction (all files) 2023-05-24 16:57:25,054 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d34c9fdcfa19fb58cb6981fec1d08c70/info in TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:25,054 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp, totalSize=80.7 K 2023-05-24 16:57:25,055 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 3f9ec9823d3349348f6f348f0fe6616b, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1684947432800 2023-05-24 16:57:25,055 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 86fe6ed31e384fc09b08c854cef9fe08, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=65, earliestPutTs=1684947434853 2023-05-24 16:57:25,055 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 0c50f6612fb64c50be64f2e0b3510c61, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1684947434888 2023-05-24 16:57:25,069 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d34c9fdcfa19fb58cb6981fec1d08c70#info#compaction#33 average 
throughput is 16.16 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:25,083 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/3fa79f899c2141bfb7200fd5d9758810 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810 2023-05-24 16:57:25,088 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d34c9fdcfa19fb58cb6981fec1d08c70/info of d34c9fdcfa19fb58cb6981fec1d08c70 into 3fa79f899c2141bfb7200fd5d9758810(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 16:57:25,088 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:25,088 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70., storeName=d34c9fdcfa19fb58cb6981fec1d08c70/info, priority=13, startTime=1684947445053; duration=0sec 2023-05-24 16:57:25,088 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K 2023-05-24 16:57:25,088 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 16:57:25,089 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,089 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,090 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33229] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,43397,1684947421602, parent={ENCODED => d34c9fdcfa19fb58cb6981fec1d08c70, NAME => 'TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-24 16:57:25,096 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33229] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,101 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33229] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=d34c9fdcfa19fb58cb6981fec1d08c70, daughterA=61e4c98c505e89bd0e9298f2ea550855, daughterB=d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,102 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure 
table=TestLogRolling-testLogRolling, parent=d34c9fdcfa19fb58cb6981fec1d08c70, daughterA=61e4c98c505e89bd0e9298f2ea550855, daughterB=d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,102 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=d34c9fdcfa19fb58cb6981fec1d08c70, daughterA=61e4c98c505e89bd0e9298f2ea550855, daughterB=d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,102 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=d34c9fdcfa19fb58cb6981fec1d08c70, daughterA=61e4c98c505e89bd0e9298f2ea550855, daughterB=d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, UNASSIGN}] 2023-05-24 16:57:25,111 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, UNASSIGN 2023-05-24 16:57:25,112 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d34c9fdcfa19fb58cb6981fec1d08c70, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,112 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947445112"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947445112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947445112"}]},"ts":"1684947445112"} 2023-05-24 16:57:25,114 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:25,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d34c9fdcfa19fb58cb6981fec1d08c70, disabling compactions & flushes 2023-05-24 16:57:25,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:25,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:25,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. after waiting 0 ms 2023-05-24 16:57:25,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 
2023-05-24 16:57:25,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing d34c9fdcfa19fb58cb6981fec1d08c70 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:57:25,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/227ee3d9068e45e192beb4d6eee0c22e 2023-05-24 16:57:25,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.tmp/info/227ee3d9068e45e192beb4d6eee0c22e as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/227ee3d9068e45e192beb4d6eee0c22e 2023-05-24 16:57:25,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/227ee3d9068e45e192beb4d6eee0c22e, entries=1, sequenceid=85, filesize=5.8 K 2023-05-24 16:57:25,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d34c9fdcfa19fb58cb6981fec1d08c70 in 24ms, sequenceid=85, compaction requested=false 2023-05-24 16:57:25,301 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61] to archive 2023-05-24 16:57:25,302 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-24 16:57:25,304 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/074d58cdeb404768af46f6ccbb2defdd 2023-05-24 16:57:25,305 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/2863323f39d345919a79991a34ca9a3c 2023-05-24 16:57:25,306 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3f9ec9823d3349348f6f348f0fe6616b 2023-05-24 16:57:25,307 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/41c4a157e0534a6c8896b56699db1a5c 2023-05-24 16:57:25,309 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/86fe6ed31e384fc09b08c854cef9fe08 2023-05-24 16:57:25,310 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61 to 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/0c50f6612fb64c50be64f2e0b3510c61 2023-05-24 16:57:25,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1 2023-05-24 16:57:25,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 2023-05-24 16:57:25,321 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d34c9fdcfa19fb58cb6981fec1d08c70: 2023-05-24 16:57:25,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,324 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d34c9fdcfa19fb58cb6981fec1d08c70, regionState=CLOSED 2023-05-24 16:57:25,324 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947445324"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947445324"}]},"ts":"1684947445324"} 2023-05-24 16:57:25,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-24 16:57:25,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure d34c9fdcfa19fb58cb6981fec1d08c70, server=jenkins-hbase20.apache.org,43397,1684947421602 in 212 msec 2023-05-24 16:57:25,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-24 16:57:25,332 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d34c9fdcfa19fb58cb6981fec1d08c70, UNASSIGN in 220 msec 2023-05-24 16:57:25,344 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=d34c9fdcfa19fb58cb6981fec1d08c70, threads=2 2023-05-24 16:57:25,345 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/227ee3d9068e45e192beb4d6eee0c22e for region: d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,345 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810 for region: d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,356 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/227ee3d9068e45e192beb4d6eee0c22e, top=true 2023-05-24 16:57:25,366 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/.splits/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e for child: d851bfc9d4267e9a867e7eaba3161e76, parent: d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,366 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/227ee3d9068e45e192beb4d6eee0c22e for region: d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,379 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810 for region: d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:57:25,379 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region d34c9fdcfa19fb58cb6981fec1d08c70 Daughter A: 1 storefiles, Daughter B: 2 storefiles. 2023-05-24 16:57:25,402 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-24 16:57:25,404 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1 2023-05-24 16:57:25,406 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684947445406"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1684947445406"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1684947445406"}]},"ts":"1684947445406"} 2023-05-24 16:57:25,406 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445406"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947445406"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947445406"}]},"ts":"1684947445406"} 2023-05-24 16:57:25,406 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445406"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947445406"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947445406"}]},"ts":"1684947445406"} 2023-05-24 16:57:25,446 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43397] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-24 16:57:25,446 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 2023-05-24 16:57:25,446 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-24 16:57:25,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=61e4c98c505e89bd0e9298f2ea550855, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d851bfc9d4267e9a867e7eaba3161e76, ASSIGN}] 2023-05-24 16:57:25,456 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=61e4c98c505e89bd0e9298f2ea550855, ASSIGN 2023-05-24 16:57:25,456 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d851bfc9d4267e9a867e7eaba3161e76, ASSIGN 2023-05-24 16:57:25,457 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=61e4c98c505e89bd0e9298f2ea550855, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,43397,1684947421602; forceNewPlan=false, retain=false 2023-05-24 16:57:25,457 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d851bfc9d4267e9a867e7eaba3161e76, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,43397,1684947421602; forceNewPlan=false, retain=false 2023-05-24 16:57:25,457 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/info/50114def825841bca90a30840d2585f9 2023-05-24 16:57:25,469 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/table/b6d8ce9db9074321862560002cb8fcc5 2023-05-24 16:57:25,474 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/info/50114def825841bca90a30840d2585f9 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info/50114def825841bca90a30840d2585f9 2023-05-24 16:57:25,479 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info/50114def825841bca90a30840d2585f9, entries=29, sequenceid=17, filesize=8.6 K 2023-05-24 16:57:25,480 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/table/b6d8ce9db9074321862560002cb8fcc5 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/table/b6d8ce9db9074321862560002cb8fcc5 2023-05-24 16:57:25,485 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/table/b6d8ce9db9074321862560002cb8fcc5, entries=4, sequenceid=17, filesize=4.8 K 2023-05-24 16:57:25,485 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 39ms, sequenceid=17, compaction requested=false 2023-05-24 16:57:25,486 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 16:57:25,610 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=61e4c98c505e89bd0e9298f2ea550855, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,610 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=d851bfc9d4267e9a867e7eaba3161e76, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,610 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445610"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947445610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947445610"}]},"ts":"1684947445610"} 2023-05-24 16:57:25,611 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445610"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947445610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947445610"}]},"ts":"1684947445610"} 2023-05-24 16:57:25,615 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure 61e4c98c505e89bd0e9298f2ea550855, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:25,617 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602}] 2023-05-24 16:57:25,774 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:57:25,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61e4c98c505e89bd0e9298f2ea550855, NAME => 'TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-24 16:57:25,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:25,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,775 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,777 INFO [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,779 DEBUG [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info 2023-05-24 16:57:25,779 DEBUG [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info 2023-05-24 16:57:25,780 INFO [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61e4c98c505e89bd0e9298f2ea550855 columnFamilyName info 2023-05-24 16:57:25,793 DEBUG [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-bottom 2023-05-24 16:57:25,793 INFO [StoreOpener-61e4c98c505e89bd0e9298f2ea550855-1] regionserver.HStore(310): Store=61e4c98c505e89bd0e9298f2ea550855/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:25,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:57:25,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 61e4c98c505e89bd0e9298f2ea550855; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=749921, jitterRate=-0.046426281332969666}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:57:25,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 61e4c98c505e89bd0e9298f2ea550855: 2023-05-24 16:57:25,800 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855., pid=17, masterSystemTime=1684947445768 2023-05-24 16:57:25,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 16:57:25,801 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-24 16:57:25,802 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:57:25,802 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): 61e4c98c505e89bd0e9298f2ea550855/info is initiating minor compaction (all files) 2023-05-24 16:57:25,802 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 61e4c98c505e89bd0e9298f2ea550855/info in TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 
2023-05-24 16:57:25,802 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-bottom] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/.tmp, totalSize=71.4 K 2023-05-24 16:57:25,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:57:25,803 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1684947432800 2023-05-24 16:57:25,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:57:25,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:25,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d851bfc9d4267e9a867e7eaba3161e76, NAME => 'TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-24 16:57:25,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,804 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=61e4c98c505e89bd0e9298f2ea550855, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:57:25,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,804 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445804"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947445804"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947445804"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947445804"}]},"ts":"1684947445804"} 2023-05-24 16:57:25,805 INFO [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,806 DEBUG [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info 2023-05-24 16:57:25,806 DEBUG [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info 2023-05-24 16:57:25,807 INFO [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d851bfc9d4267e9a867e7eaba3161e76 columnFamilyName info 2023-05-24 16:57:25,808 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-24 16:57:25,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure 61e4c98c505e89bd0e9298f2ea550855, server=jenkins-hbase20.apache.org,43397,1684947421602 in 191 msec 2023-05-24 16:57:25,810 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=61e4c98c505e89bd0e9298f2ea550855, ASSIGN in 355 msec 2023-05-24 16:57:25,812 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): 61e4c98c505e89bd0e9298f2ea550855#info#compaction#37 average throughput is 15.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:25,816 DEBUG [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-top 2023-05-24 16:57:25,824 DEBUG [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e 2023-05-24 16:57:25,824 INFO [StoreOpener-d851bfc9d4267e9a867e7eaba3161e76-1] regionserver.HStore(310): Store=d851bfc9d4267e9a867e7eaba3161e76/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:57:25,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,827 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/.tmp/info/4c8cdb0f9de6445bb5be9d03943b7621 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/4c8cdb0f9de6445bb5be9d03943b7621 2023-05-24 16:57:25,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:25,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d851bfc9d4267e9a867e7eaba3161e76; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=784713, jitterRate=-0.0021866261959075928}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:57:25,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:25,830 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., pid=18, masterSystemTime=1684947445768 2023-05-24 16:57:25,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,831 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-05-24 16:57:25,833 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:25,833 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:57:25,833 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:25,833 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-top, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=77.2 K 2023-05-24 16:57:25,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:25,833 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 
2023-05-24 16:57:25,834 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting 3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1684947432800 2023-05-24 16:57:25,834 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=d851bfc9d4267e9a867e7eaba3161e76, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:25,834 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684947445834"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947445834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947445834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947445834"}]},"ts":"1684947445834"} 2023-05-24 16:57:25,834 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 61e4c98c505e89bd0e9298f2ea550855/info of 61e4c98c505e89bd0e9298f2ea550855 into 4c8cdb0f9de6445bb5be9d03943b7621(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 16:57:25,835 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1684947445024 2023-05-24 16:57:25,835 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 61e4c98c505e89bd0e9298f2ea550855: 2023-05-24 16:57:25,835 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855., storeName=61e4c98c505e89bd0e9298f2ea550855/info, priority=15, startTime=1684947445800; duration=0sec 2023-05-24 16:57:25,835 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:25,837 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-24 16:57:25,837 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 in 219 msec 2023-05-24 16:57:25,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-24 16:57:25,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d851bfc9d4267e9a867e7eaba3161e76, ASSIGN in 383 msec 2023-05-24 16:57:25,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=d34c9fdcfa19fb58cb6981fec1d08c70, daughterA=61e4c98c505e89bd0e9298f2ea550855, daughterB=d851bfc9d4267e9a867e7eaba3161e76 in 743 msec 2023-05-24 
16:57:25,842 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#38 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:25,853 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/5c6baf607a1f47c28fde22701fd5132b as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/5c6baf607a1f47c28fde22701fd5132b 2023-05-24 16:57:25,860 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into 5c6baf607a1f47c28fde22701fd5132b(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 16:57:25,860 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:25,860 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=14, startTime=1684947445830; duration=0sec 2023-05-24 16:57:25,860 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:27,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947457028, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1684947422781.d34c9fdcfa19fb58cb6981fec1d08c70. 
is not online on jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:30,891 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:57:37,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:37,154 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:57:37,165 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/4e332edeb6df488fba7116ad42b3bc22 2023-05-24 16:57:37,172 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/4e332edeb6df488fba7116ad42b3bc22 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22 2023-05-24 16:57:37,177 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22, entries=7, sequenceid=99, filesize=12.1 K 2023-05-24 16:57:37,178 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for d851bfc9d4267e9a867e7eaba3161e76 in 25ms, sequenceid=99, compaction requested=false 2023-05-24 16:57:37,178 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:37,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:37,178 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-24 16:57:37,191 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=120 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/7de31d413d8941869a731e54926040fe 2023-05-24 16:57:37,196 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/7de31d413d8941869a731e54926040fe as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe 2023-05-24 16:57:37,202 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe, entries=18, sequenceid=120, filesize=23.7 K 2023-05-24 16:57:37,203 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=7.36 KB/7532 for d851bfc9d4267e9a867e7eaba3161e76 in 24ms, sequenceid=120, compaction requested=true 2023-05-24 16:57:37,203 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:37,203 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 16:57:37,203 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:37,204 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 44914 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:37,204 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:57:37,204 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:37,204 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/5c6baf607a1f47c28fde22701fd5132b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=43.9 K 2023-05-24 16:57:37,204 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 5c6baf607a1f47c28fde22701fd5132b, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1684947434912 2023-05-24 16:57:37,205 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 4e332edeb6df488fba7116ad42b3bc22, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1684947457144 2023-05-24 16:57:37,205 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 7de31d413d8941869a731e54926040fe, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1684947457154 2023-05-24 16:57:37,214 INFO 
[RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#41 average throughput is 28.73 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:37,224 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/b5745942a2404d1a908de22d001a416d as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/b5745942a2404d1a908de22d001a416d 2023-05-24 16:57:37,230 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into b5745942a2404d1a908de22d001a416d(size=34.5 K), total size for store is 34.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 16:57:37,230 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:37,230 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947457203; duration=0sec 2023-05-24 16:57:37,230 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:39,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:39,194 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-24 16:57:39,211 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/e2300d62fa744fdb8577b7d5b38fb05e 2023-05-24 16:57:39,217 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/e2300d62fa744fdb8577b7d5b38fb05e as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e 2023-05-24 16:57:39,223 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e, entries=8, sequenceid=132, filesize=13.2 K 2023-05-24 16:57:39,224 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=17.86 KB/18292 for d851bfc9d4267e9a867e7eaba3161e76 in 30ms, sequenceid=132, compaction requested=false 2023-05-24 16:57:39,224 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:39,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:39,225 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-24 16:57:39,236 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=154 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/92210c166362460a8df2332328603076 2023-05-24 16:57:39,240 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 16:57:39,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947469240, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:39,246 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/92210c166362460a8df2332328603076 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076 2023-05-24 16:57:39,251 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076, entries=19, sequenceid=154, filesize=24.8 K 2023-05-24 16:57:39,252 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of 
dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for d851bfc9d4267e9a867e7eaba3161e76 in 27ms, sequenceid=154, compaction requested=true 2023-05-24 16:57:39,252 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:39,252 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:39,252 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:39,253 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 74176 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:39,253 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:57:39,253 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:39,253 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/b5745942a2404d1a908de22d001a416d, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=72.4 K 2023-05-24 16:57:39,254 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting b5745942a2404d1a908de22d001a416d, keycount=28, bloomtype=ROW, size=34.5 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1684947434912 2023-05-24 16:57:39,254 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting e2300d62fa744fdb8577b7d5b38fb05e, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1684947457178 2023-05-24 16:57:39,255 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 92210c166362460a8df2332328603076, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=1684947459196 2023-05-24 16:57:39,266 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#44 average throughput is 56.44 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:39,283 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/c86de4e4c5064318a00d92d50627d5b2 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c86de4e4c5064318a00d92d50627d5b2 2023-05-24 16:57:39,288 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into c86de4e4c5064318a00d92d50627d5b2(size=63.1 K), total size for store is 63.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 16:57:39,288 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:39,288 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947459252; duration=0sec 2023-05-24 16:57:39,288 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:47,048 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-24 16:57:47,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=11, reused chunk count=35, reuseRatio=76.09% 2023-05-24 16:57:49,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:49,278 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-24 16:57:49,294 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=169 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/ec32f462bef6436da4358af3b42f950d 2023-05-24 16:57:49,305 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/ec32f462bef6436da4358af3b42f950d as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d 2023-05-24 16:57:49,313 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d, entries=11, sequenceid=169, filesize=16.3 K 2023-05-24 16:57:49,314 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for d851bfc9d4267e9a867e7eaba3161e76 in 36ms, sequenceid=169, compaction requested=false 2023-05-24 16:57:49,315 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:51,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:51,301 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:57:51,315 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=179 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/1a34444983d0485cb1e6d13227dc0370 2023-05-24 16:57:51,321 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/1a34444983d0485cb1e6d13227dc0370 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370 2023-05-24 16:57:51,327 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370, entries=7, sequenceid=179, filesize=12.1 K 2023-05-24 16:57:51,327 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=15.76 KB/16140 for d851bfc9d4267e9a867e7eaba3161e76 in 27ms, sequenceid=179, compaction requested=true 2023-05-24 16:57:51,327 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:51,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:51,327 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:51,328 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:51,328 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB 2023-05-24 16:57:51,329 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] 
compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93734 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:51,329 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:57:51,329 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:57:51,329 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c86de4e4c5064318a00d92d50627d5b2, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=91.5 K 2023-05-24 16:57:51,330 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting c86de4e4c5064318a00d92d50627d5b2, keycount=55, bloomtype=ROW, size=63.1 K, encoding=NONE, compression=NONE, seqNum=154, earliestPutTs=1684947434912 2023-05-24 16:57:51,330 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting ec32f462bef6436da4358af3b42f950d, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1684947459226 2023-05-24 16:57:51,330 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 1a34444983d0485cb1e6d13227dc0370, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1684947469279 2023-05-24 16:57:51,343 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=198 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2ed46dcaead5411b92f0ece8a4dbe64b 2023-05-24 16:57:51,346 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#48 average throughput is 74.91 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:51,349 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2ed46dcaead5411b92f0ece8a4dbe64b as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b 2023-05-24 16:57:51,357 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b, entries=16, sequenceid=198, filesize=21.6 K 2023-05-24 16:57:51,358 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=10.51 KB/10760 for d851bfc9d4267e9a867e7eaba3161e76 in 30ms, sequenceid=198, compaction requested=false 2023-05-24 16:57:51,358 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:51,361 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/4e3233ded7ae40aeb82e6239dc6f4570 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e3233ded7ae40aeb82e6239dc6f4570 2023-05-24 16:57:51,366 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into 4e3233ded7ae40aeb82e6239dc6f4570(size=82.2 K), total size for store is 103.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 16:57:51,366 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:51,367 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947471327; duration=0sec 2023-05-24 16:57:51,367 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:53,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:57:53,343 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-24 16:57:53,362 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/c063224d4f6748d0b1e99d1e5b1827a9 2023-05-24 16:57:53,368 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/c063224d4f6748d0b1e99d1e5b1827a9 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9 2023-05-24 16:57:53,371 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 16:57:53,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 196 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947483371, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:57:53,373 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9, entries=11, sequenceid=213, filesize=16.3 K 2023-05-24 16:57:53,374 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=18.91 KB/19368 for d851bfc9d4267e9a867e7eaba3161e76 in 31ms, sequenceid=213, compaction requested=true 2023-05-24 16:57:53,374 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:53,374 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 16:57:53,374 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:57:53,375 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 123035 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:57:53,375 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:57:53,375 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 
2023-05-24 16:57:53,375 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e3233ded7ae40aeb82e6239dc6f4570, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=120.2 K 2023-05-24 16:57:53,376 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 4e3233ded7ae40aeb82e6239dc6f4570, keycount=73, bloomtype=ROW, size=82.2 K, encoding=NONE, compression=NONE, seqNum=179, earliestPutTs=1684947434912 2023-05-24 16:57:53,376 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 2ed46dcaead5411b92f0ece8a4dbe64b, keycount=16, bloomtype=ROW, size=21.6 K, encoding=NONE, compression=NONE, seqNum=198, earliestPutTs=1684947471302 2023-05-24 16:57:53,376 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting c063224d4f6748d0b1e99d1e5b1827a9, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1684947471328 2023-05-24 16:57:53,386 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#50 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:57:53,398 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/654b467854aa421083f6912a145f3897 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/654b467854aa421083f6912a145f3897 2023-05-24 16:57:53,404 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into 654b467854aa421083f6912a145f3897(size=110.7 K), total size for store is 110.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
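The repeated "Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking" entries reflect the store-file count thresholds that trigger minor compactions and throttle flushes. A minimal sketch of the corresponding configuration keys, with values matching the figures in the log (shown here as illustrative settings, not extracted from this run's config):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionSelectionSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // A minor compaction becomes eligible once this many store files accumulate
        // ("Selecting compaction from 3 store files ... 3 eligible" above).
        conf.setInt("hbase.hstore.compactionThreshold", 3);
        // Flushes are blocked once a store reaches this many files ("16 blocking" above).
        conf.setInt("hbase.hstore.blockingStoreFiles", 16);
    }
}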
2023-05-24 16:57:53,404 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:57:53,404 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947473374; duration=0sec 2023-05-24 16:57:53,404 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:57:53,953 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 16:58:03,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:03,380 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-24 16:58:03,393 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=236 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/21096b128dd74992809e6430c5d19ef8 2023-05-24 16:58:03,399 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 16:58:03,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 209 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947493399, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:03,400 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/21096b128dd74992809e6430c5d19ef8 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8 2023-05-24 16:58:03,406 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8, entries=19, sequenceid=236, filesize=24.8 K 2023-05-24 16:58:03,407 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for d851bfc9d4267e9a867e7eaba3161e76 in 27ms, sequenceid=236, compaction requested=false 2023-05-24 16:58:03,407 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:13,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:13,437 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=11.56 KB heapSize=12.63 KB 2023-05-24 16:58:13,451 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=11.56 KB at sequenceid=250 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/d22a5981b87449a588c20ad88414091c 2023-05-24 16:58:13,456 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/d22a5981b87449a588c20ad88414091c as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c 2023-05-24 16:58:13,462 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c, entries=11, sequenceid=250, filesize=16.3 K 2023-05-24 16:58:13,463 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~11.56 KB/11836, heapSize ~12.61 KB/12912, currentSize=1.05 KB/1076 for d851bfc9d4267e9a867e7eaba3161e76 in 26ms, sequenceid=250, compaction requested=true 2023-05-24 16:58:13,463 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:13,463 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 16:58:13,463 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:58:13,464 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155485 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:58:13,464 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:58:13,464 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 
2023-05-24 16:58:13,464 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/654b467854aa421083f6912a145f3897, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=151.8 K 2023-05-24 16:58:13,465 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting 654b467854aa421083f6912a145f3897, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1684947434912 2023-05-24 16:58:13,465 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting 21096b128dd74992809e6430c5d19ef8, keycount=19, bloomtype=ROW, size=24.8 K, encoding=NONE, compression=NONE, seqNum=236, earliestPutTs=1684947473345 2023-05-24 16:58:13,466 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting d22a5981b87449a588c20ad88414091c, keycount=11, bloomtype=ROW, size=16.3 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1684947483381 2023-05-24 16:58:13,481 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#53 average throughput is 44.47 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:58:13,492 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/8592c13c58b847a295566309e03b6f57 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/8592c13c58b847a295566309e03b6f57 2023-05-24 16:58:13,498 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into 8592c13c58b847a295566309e03b6f57(size=142.6 K), total size for store is 142.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 16:58:13,498 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:13,498 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947493463; duration=0sec 2023-05-24 16:58:13,498 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:58:15,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:15,458 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 16:58:15,472 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=261 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/0565d77cbc52413fa11d325f16f4edd8 2023-05-24 16:58:15,478 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/0565d77cbc52413fa11d325f16f4edd8 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8 2023-05-24 16:58:15,484 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8, entries=7, sequenceid=261, filesize=12.1 K 2023-05-24 16:58:15,485 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for d851bfc9d4267e9a867e7eaba3161e76 in 27ms, sequenceid=261, compaction requested=false 2023-05-24 16:58:15,485 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:15,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:15,486 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-24 16:58:15,495 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=282 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/d4b194b8eeec45c986fdbf8c115c8791 2023-05-24 16:58:15,501 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/d4b194b8eeec45c986fdbf8c115c8791 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791 2023-05-24 16:58:15,505 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791, entries=18, sequenceid=282, filesize=23.7 K 2023-05-24 16:58:15,506 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=8.41 KB/8608 for d851bfc9d4267e9a867e7eaba3161e76 in 20ms, sequenceid=282, compaction requested=true 2023-05-24 16:58:15,506 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:15,506 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 16:58:15,506 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:58:15,507 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 182765 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:58:15,507 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:58:15,507 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 
2023-05-24 16:58:15,507 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/8592c13c58b847a295566309e03b6f57, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=178.5 K 2023-05-24 16:58:15,508 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting 8592c13c58b847a295566309e03b6f57, keycount=130, bloomtype=ROW, size=142.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1684947434912 2023-05-24 16:58:15,508 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting 0565d77cbc52413fa11d325f16f4edd8, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=261, earliestPutTs=1684947493438 2023-05-24 16:58:15,509 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] compactions.Compactor(207): Compacting d4b194b8eeec45c986fdbf8c115c8791, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=282, earliestPutTs=1684947495459 2023-05-24 16:58:15,520 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#56 average throughput is 159.05 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:58:15,532 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/ac357539e5e243268de8c3e8174d405e as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ac357539e5e243268de8c3e8174d405e 2023-05-24 16:58:15,537 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into ac357539e5e243268de8c3e8174d405e(size=169.1 K), total size for store is 169.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
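The PressureAwareThroughputController entries above report a "total limit is 50.00 MB/second" for compaction I/O. A minimal sketch of how such bounds could be set; the key names are given as an assumption about the pressure-aware throughput controller configuration, and the values are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Illustrative bounds; 50 MB/s is the limit this particular run reports.
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    }
}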
2023-05-24 16:58:15,538 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:15,538 INFO [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947495506; duration=0sec 2023-05-24 16:58:15,538 DEBUG [RS:0;jenkins-hbase20:43397-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:58:17,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:17,497 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-24 16:58:17,515 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2adca9bfdcb345a99cdf2e23ff944131 2023-05-24 16:58:17,521 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2adca9bfdcb345a99cdf2e23ff944131 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131 2023-05-24 16:58:17,523 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 16:58:17,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 266 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947507523, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:17,526 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131, entries=9, sequenceid=295, filesize=14.2 K 2023-05-24 16:58:17,527 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=21.02 KB/21520 for d851bfc9d4267e9a867e7eaba3161e76 in 30ms, sequenceid=295, compaction requested=false 2023-05-24 16:58:17,527 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:27,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:27,581 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-24 16:58:27,597 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 16:58:27,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:59626 deadline: 1684947517596, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=d851bfc9d4267e9a867e7eaba3161e76, server=jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:27,598 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=319 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2b981be8e0ab437fbb8f2cb2bfc56f4a 2023-05-24 16:58:27,614 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/2b981be8e0ab437fbb8f2cb2bfc56f4a as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a 2023-05-24 16:58:27,623 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a, entries=21, sequenceid=319, filesize=26.9 K 2023-05-24 16:58:27,624 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=8.41 KB/8608 for d851bfc9d4267e9a867e7eaba3161e76 in 43ms, sequenceid=319, compaction requested=true 2023-05-24 16:58:27,624 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:27,625 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 16:58:27,625 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:58:27,627 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of 
size 215245 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 16:58:27,628 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1912): d851bfc9d4267e9a867e7eaba3161e76/info is initiating minor compaction (all files) 2023-05-24 16:58:27,628 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d851bfc9d4267e9a867e7eaba3161e76/info in TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:27,628 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ac357539e5e243268de8c3e8174d405e, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a] into tmpdir=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp, totalSize=210.2 K 2023-05-24 16:58:27,630 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting ac357539e5e243268de8c3e8174d405e, keycount=155, bloomtype=ROW, size=169.1 K, encoding=NONE, compression=NONE, seqNum=282, earliestPutTs=1684947434912 2023-05-24 16:58:27,631 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 2adca9bfdcb345a99cdf2e23ff944131, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1684947495486 2023-05-24 16:58:27,631 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] compactions.Compactor(207): Compacting 2b981be8e0ab437fbb8f2cb2bfc56f4a, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=319, earliestPutTs=1684947497498 2023-05-24 16:58:27,662 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] throttle.PressureAwareThroughputController(145): d851bfc9d4267e9a867e7eaba3161e76#info#compaction#59 average throughput is 63.28 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 16:58:27,686 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/e2f01a1f4ea547109c481af79873792e as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2f01a1f4ea547109c481af79873792e 2023-05-24 16:58:27,693 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d851bfc9d4267e9a867e7eaba3161e76/info of d851bfc9d4267e9a867e7eaba3161e76 into e2f01a1f4ea547109c481af79873792e(size=200.9 K), total size for store is 200.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 16:58:27,693 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:27,693 INFO [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76., storeName=d851bfc9d4267e9a867e7eaba3161e76/info, priority=13, startTime=1684947507624; duration=0sec 2023-05-24 16:58:27,693 DEBUG [RS:0;jenkins-hbase20:43397-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 16:58:37,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43397] regionserver.HRegion(9158): Flush requested on d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:37,685 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-24 16:58:37,693 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=332 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/51c0eb7f68984c5d90026a7fff43eb26 2023-05-24 16:58:37,700 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/51c0eb7f68984c5d90026a7fff43eb26 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/51c0eb7f68984c5d90026a7fff43eb26 2023-05-24 16:58:37,705 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/51c0eb7f68984c5d90026a7fff43eb26, entries=9, sequenceid=332, filesize=14.2 K 2023-05-24 16:58:37,706 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=1.05 KB/1076 for d851bfc9d4267e9a867e7eaba3161e76 in 21ms, sequenceid=332, compaction requested=false 2023-05-24 16:58:37,706 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:39,686 INFO [Listener at localhost.localdomain/40087] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-24 16:58:39,717 INFO [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947422026 with entries=316, filesize=309.16 KB; new WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519687 2023-05-24 16:58:39,717 DEBUG [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45905,DS-4640d682-4aaf-42a1-971f-44c48e99f33c,DISK], DatanodeInfoWithStorage[127.0.0.1:37073,DS-173d2ba4-c529-4713-b892-0f68ecd170c3,DISK]] 2023-05-24 16:58:39,718 DEBUG [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947422026 is not closed yet, will try archiving it next time 2023-05-24 16:58:39,726 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2745): Flushing 8a4d043522699af70c775e2ba14b314d 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:58:39,739 INFO [Listener at localhost.localdomain/40087] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/.tmp/info/1653b853ea3348b098434dd3e389f061 2023-05-24 16:58:39,745 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/.tmp/info/1653b853ea3348b098434dd3e389f061 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/info/1653b853ea3348b098434dd3e389f061 2023-05-24 16:58:39,751 INFO [Listener at localhost.localdomain/40087] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/info/1653b853ea3348b098434dd3e389f061, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 16:58:39,753 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8a4d043522699af70c775e2ba14b314d in 27ms, sequenceid=6, compaction requested=false 2023-05-24 16:58:39,754 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegion(2446): Flush status journal for 8a4d043522699af70c775e2ba14b314d: 2023-05-24 16:58:39,754 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegion(2446): Flush status journal for 61e4c98c505e89bd0e9298f2ea550855: 2023-05-24 16:58:39,754 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-24 16:58:39,765 INFO [Listener at localhost.localdomain/40087] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/info/c087977138824481a398e2ee597720a0 2023-05-24 16:58:39,773 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/.tmp/info/c087977138824481a398e2ee597720a0 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info/c087977138824481a398e2ee597720a0 2023-05-24 16:58:39,779 INFO [Listener at localhost.localdomain/40087] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/info/c087977138824481a398e2ee597720a0, entries=16, sequenceid=24, filesize=7.0 K 2023-05-24 16:58:39,780 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 26ms, sequenceid=24, compaction requested=false 2023-05-24 16:58:39,780 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 16:58:39,781 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2745): Flushing d851bfc9d4267e9a867e7eaba3161e76 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 16:58:39,793 INFO [Listener at localhost.localdomain/40087] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=336 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/e28973750aee43b0a2b38d9c65b03935 2023-05-24 16:58:39,798 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/.tmp/info/e28973750aee43b0a2b38d9c65b03935 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e28973750aee43b0a2b38d9c65b03935 2023-05-24 16:58:39,802 INFO [Listener at localhost.localdomain/40087] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e28973750aee43b0a2b38d9c65b03935, entries=1, sequenceid=336, filesize=5.8 K 2023-05-24 16:58:39,803 INFO [Listener at localhost.localdomain/40087] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for d851bfc9d4267e9a867e7eaba3161e76 in 22ms, sequenceid=336, compaction requested=true 2023-05-24 16:58:39,803 DEBUG [Listener at localhost.localdomain/40087] regionserver.HRegion(2446): Flush status journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:39,809 INFO [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519687 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519803 2023-05-24 16:58:39,809 DEBUG [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37073,DS-173d2ba4-c529-4713-b892-0f68ecd170c3,DISK], DatanodeInfoWithStorage[127.0.0.1:45905,DS-4640d682-4aaf-42a1-971f-44c48e99f33c,DISK]] 2023-05-24 16:58:39,809 DEBUG [Listener at localhost.localdomain/40087] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519687 is not closed yet, will try archiving it next time 2023-05-24 16:58:39,810 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947422026 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947422026 2023-05-24 16:58:39,811 INFO [Listener at localhost.localdomain/40087] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 16:58:39,813 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519687 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs/jenkins-hbase20.apache.org%2C43397%2C1684947421602.1684947519687 2023-05-24 16:58:39,911 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 16:58:39,911 INFO [Listener at localhost.localdomain/40087] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 16:58:39,911 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1191a066 to 127.0.0.1:63859 2023-05-24 16:58:39,912 DEBUG [Listener at localhost.localdomain/40087] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:39,912 DEBUG [Listener at localhost.localdomain/40087] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 16:58:39,912 DEBUG [Listener at localhost.localdomain/40087] util.JVMClusterUtil(257): Found active master hash=1757643698, stopped=false 2023-05-24 16:58:39,912 INFO [Listener at localhost.localdomain/40087] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:58:39,914 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:58:39,914 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:58:39,914 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:39,914 INFO [Listener at localhost.localdomain/40087] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 16:58:39,915 DEBUG [Listener at localhost.localdomain/40087] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x502c25f4 to 127.0.0.1:63859 2023-05-24 16:58:39,916 DEBUG [Listener at localhost.localdomain/40087] ipc.AbstractRpcClient(494): Stopping rpc client 
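The "Rolled WAL ... new WAL ..." and "Archiving ... to .../oldWALs" entries above are the log-rolling behavior this test exercises. A minimal client-side sketch of triggering a roll via the Admin API; the server name here is hypothetical (the test derives the region server from its minicluster rather than hard-coding it):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WalRollSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Hypothetical server name matching the host/port/startcode format in the log.
            ServerName rs = ServerName.valueOf("jenkins-hbase20.apache.org", 43397, 1684947421602L);
            // Ask the region server to close its current WAL and open a new one,
            // which is what the "Rolled WAL ... new WAL ..." entries record.
            admin.rollWALWriter(rs);
        }
    }
}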
2023-05-24 16:58:39,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:39,916 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:39,917 INFO [Listener at localhost.localdomain/40087] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,43397,1684947421602' ***** 2023-05-24 16:58:39,917 INFO [Listener at localhost.localdomain/40087] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:58:39,918 INFO [RS:0;jenkins-hbase20:43397] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:58:39,918 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:58:39,918 INFO [RS:0;jenkins-hbase20:43397] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:58:39,918 INFO [RS:0;jenkins-hbase20:43397] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 16:58:39,918 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(3303): Received CLOSE for 8a4d043522699af70c775e2ba14b314d 2023-05-24 16:58:39,919 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(3303): Received CLOSE for 61e4c98c505e89bd0e9298f2ea550855 2023-05-24 16:58:39,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8a4d043522699af70c775e2ba14b314d, disabling compactions & flushes 2023-05-24 16:58:39,919 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(3303): Received CLOSE for d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:39,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:58:39,919 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:39,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:58:39,919 DEBUG [RS:0;jenkins-hbase20:43397] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x225a82a0 to 127.0.0.1:63859 2023-05-24 16:58:39,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. after waiting 0 ms 2023-05-24 16:58:39,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:58:39,920 DEBUG [RS:0;jenkins-hbase20:43397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:39,920 INFO [RS:0;jenkins-hbase20:43397] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:58:39,920 INFO [RS:0;jenkins-hbase20:43397] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-24 16:58:39,920 INFO [RS:0;jenkins-hbase20:43397] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 16:58:39,920 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:58:39,923 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-24 16:58:39,923 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1478): Online Regions={8a4d043522699af70c775e2ba14b314d=hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d., 61e4c98c505e89bd0e9298f2ea550855=TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855., 1588230740=hbase:meta,,1.1588230740, d851bfc9d4267e9a867e7eaba3161e76=TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.} 2023-05-24 16:58:39,923 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:58:39,923 DEBUG [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1504): Waiting on 1588230740, 61e4c98c505e89bd0e9298f2ea550855, 8a4d043522699af70c775e2ba14b314d, d851bfc9d4267e9a867e7eaba3161e76 2023-05-24 16:58:39,923 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:58:39,925 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:58:39,925 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:58:39,925 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:58:39,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/namespace/8a4d043522699af70c775e2ba14b314d/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 16:58:39,932 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-24 16:58:39,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:58:39,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8a4d043522699af70c775e2ba14b314d: 2023-05-24 16:58:39,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684947422262.8a4d043522699af70c775e2ba14b314d. 2023-05-24 16:58:39,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 61e4c98c505e89bd0e9298f2ea550855, disabling compactions & flushes 2023-05-24 16:58:39,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 
2023-05-24 16:58:39,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:58:39,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. after waiting 0 ms 2023-05-24 16:58:39,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:58:39,934 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 16:58:39,934 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-bottom] to archive 2023-05-24 16:58:39,934 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:58:39,934 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:58:39,934 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 16:58:39,935 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-24 16:58:39,936 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:58:39,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/61e4c98c505e89bd0e9298f2ea550855/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88 2023-05-24 16:58:39,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 
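The "Shutting down minicluster", "STOPPING region server" and "Closed ..." entries in this stretch are the test's teardown. A minimal sketch of the teardown hook, assuming the conventional TEST_UTIL field name used in HBase tests:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
    private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

    @AfterClass
    public static void tearDownAfterClass() throws Exception {
        // Stops the region server, master, DFS and ZooKeeper processes started by
        // startMiniCluster(), producing the STOPPING / Closed / archiving entries above.
        TEST_UTIL.shutdownMiniCluster();
    }
}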
2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 61e4c98c505e89bd0e9298f2ea550855: 2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1684947445096.61e4c98c505e89bd0e9298f2ea550855. 2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d851bfc9d4267e9a867e7eaba3161e76, disabling compactions & flushes 2023-05-24 16:58:39,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. after waiting 0 ms 2023-05-24 16:58:39,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:39,952 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70->hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d34c9fdcfa19fb58cb6981fec1d08c70/info/3fa79f899c2141bfb7200fd5d9758810-top, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/5c6baf607a1f47c28fde22701fd5132b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/b5745942a2404d1a908de22d001a416d, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e, 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c86de4e4c5064318a00d92d50627d5b2, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e3233ded7ae40aeb82e6239dc6f4570, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/654b467854aa421083f6912a145f3897, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/8592c13c58b847a295566309e03b6f57, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ac357539e5e243268de8c3e8174d405e, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a] to archive 2023-05-24 16:58:39,952 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(360): Archiving compacted 
files. 2023-05-24 16:58:39,954 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/3fa79f899c2141bfb7200fd5d9758810.d34c9fdcfa19fb58cb6981fec1d08c70 2023-05-24 16:58:39,955 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/5c6baf607a1f47c28fde22701fd5132b to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/5c6baf607a1f47c28fde22701fd5132b 2023-05-24 16:58:39,956 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/TestLogRolling-testLogRolling=d34c9fdcfa19fb58cb6981fec1d08c70-227ee3d9068e45e192beb4d6eee0c22e 2023-05-24 16:58:39,958 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e332edeb6df488fba7116ad42b3bc22 2023-05-24 16:58:39,959 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/b5745942a2404d1a908de22d001a416d to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/b5745942a2404d1a908de22d001a416d 2023-05-24 16:58:39,960 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/7de31d413d8941869a731e54926040fe 2023-05-24 16:58:39,961 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/e2300d62fa744fdb8577b7d5b38fb05e 2023-05-24 16:58:39,962 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c86de4e4c5064318a00d92d50627d5b2 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c86de4e4c5064318a00d92d50627d5b2 2023-05-24 16:58:39,963 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/92210c166362460a8df2332328603076 2023-05-24 16:58:39,965 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ec32f462bef6436da4358af3b42f950d 2023-05-24 16:58:39,966 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e3233ded7ae40aeb82e6239dc6f4570 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/4e3233ded7ae40aeb82e6239dc6f4570 2023-05-24 
16:58:39,967 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/1a34444983d0485cb1e6d13227dc0370 2023-05-24 16:58:39,968 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2ed46dcaead5411b92f0ece8a4dbe64b 2023-05-24 16:58:39,969 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/654b467854aa421083f6912a145f3897 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/654b467854aa421083f6912a145f3897 2023-05-24 16:58:39,971 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/c063224d4f6748d0b1e99d1e5b1827a9 2023-05-24 16:58:39,972 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/21096b128dd74992809e6430c5d19ef8 2023-05-24 16:58:39,973 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/8592c13c58b847a295566309e03b6f57 to 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/8592c13c58b847a295566309e03b6f57 2023-05-24 16:58:39,974 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d22a5981b87449a588c20ad88414091c 2023-05-24 16:58:39,976 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/0565d77cbc52413fa11d325f16f4edd8 2023-05-24 16:58:39,977 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ac357539e5e243268de8c3e8174d405e to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/ac357539e5e243268de8c3e8174d405e 2023-05-24 16:58:39,979 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/d4b194b8eeec45c986fdbf8c115c8791 2023-05-24 16:58:39,980 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131 to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2adca9bfdcb345a99cdf2e23ff944131 2023-05-24 16:58:39,981 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a to hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/archive/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/info/2b981be8e0ab437fbb8f2cb2bfc56f4a 2023-05-24 16:58:39,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/data/default/TestLogRolling-testLogRolling/d851bfc9d4267e9a867e7eaba3161e76/recovered.edits/339.seqid, newMaxSeqId=339, maxSeqId=88 2023-05-24 16:58:39,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:39,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d851bfc9d4267e9a867e7eaba3161e76: 2023-05-24 16:58:39,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1684947445096.d851bfc9d4267e9a867e7eaba3161e76. 2023-05-24 16:58:40,125 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43397,1684947421602; all regions closed. 2023-05-24 16:58:40,126 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:40,138 DEBUG [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs 2023-05-24 16:58:40,138 INFO [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43397%2C1684947421602.meta:.meta(num 1684947422214) 2023-05-24 16:58:40,139 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/WALs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:40,146 DEBUG [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/oldWALs 2023-05-24 16:58:40,146 INFO [RS:0;jenkins-hbase20:43397] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43397%2C1684947421602:(num 1684947519803) 2023-05-24 16:58:40,146 DEBUG [RS:0;jenkins-hbase20:43397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:40,146 INFO [RS:0;jenkins-hbase20:43397] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:58:40,147 INFO [RS:0;jenkins-hbase20:43397] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 16:58:40,147 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 16:58:40,148 INFO [RS:0;jenkins-hbase20:43397] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43397 2023-05-24 16:58:40,151 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43397,1684947421602 2023-05-24 16:58:40,151 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:58:40,151 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:58:40,152 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43397,1684947421602] 2023-05-24 16:58:40,152 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43397,1684947421602; numProcessing=1 2023-05-24 16:58:40,154 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43397,1684947421602 already deleted, retry=false 2023-05-24 16:58:40,154 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43397,1684947421602 expired; onlineServers=0 2023-05-24 16:58:40,154 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,33229,1684947421566' ***** 2023-05-24 16:58:40,154 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 16:58:40,154 DEBUG [M:0;jenkins-hbase20:33229] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cfc1a72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:58:40,155 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:58:40,155 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33229,1684947421566; all regions closed. 2023-05-24 16:58:40,155 DEBUG [M:0;jenkins-hbase20:33229] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:40,155 DEBUG [M:0;jenkins-hbase20:33229] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 16:58:40,155 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-24 16:58:40,155 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947421783] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947421783,5,FailOnTimeoutGroup] 2023-05-24 16:58:40,155 DEBUG [M:0;jenkins-hbase20:33229] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 16:58:40,155 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947421783] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947421783,5,FailOnTimeoutGroup] 2023-05-24 16:58:40,157 INFO [M:0;jenkins-hbase20:33229] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 16:58:40,157 INFO [M:0;jenkins-hbase20:33229] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 16:58:40,157 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 16:58:40,157 INFO [M:0;jenkins-hbase20:33229] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 16:58:40,157 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:40,158 DEBUG [M:0;jenkins-hbase20:33229] master.HMaster(1512): Stopping service threads 2023-05-24 16:58:40,158 INFO [M:0;jenkins-hbase20:33229] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 16:58:40,158 ERROR [M:0;jenkins-hbase20:33229] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 16:58:40,158 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:58:40,158 INFO [M:0;jenkins-hbase20:33229] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 16:58:40,158 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 16:58:40,159 DEBUG [M:0;jenkins-hbase20:33229] zookeeper.ZKUtil(398): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 16:58:40,159 WARN [M:0;jenkins-hbase20:33229] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 16:58:40,159 INFO [M:0;jenkins-hbase20:33229] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 16:58:40,160 INFO [M:0;jenkins-hbase20:33229] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 16:58:40,160 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:58:40,160 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:40,160 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:40,160 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:58:40,160 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:40,161 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-05-24 16:58:40,175 INFO [M:0;jenkins-hbase20:33229] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc27ce898ea4fefaee2f26da6c49928 2023-05-24 16:58:40,180 INFO [M:0;jenkins-hbase20:33229] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1bc27ce898ea4fefaee2f26da6c49928 2023-05-24 16:58:40,181 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc27ce898ea4fefaee2f26da6c49928 as hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc27ce898ea4fefaee2f26da6c49928 2023-05-24 16:58:40,187 INFO [M:0;jenkins-hbase20:33229] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1bc27ce898ea4fefaee2f26da6c49928 2023-05-24 16:58:40,187 INFO [M:0;jenkins-hbase20:33229] regionserver.HStore(1080): Added hdfs://localhost.localdomain:42999/user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc27ce898ea4fefaee2f26da6c49928, entries=18, sequenceid=160, filesize=6.9 K 2023-05-24 16:58:40,188 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize 
~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=160, compaction requested=false 2023-05-24 16:58:40,189 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:40,189 DEBUG [M:0;jenkins-hbase20:33229] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:58:40,190 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/907d40d5-11e0-d36e-1f55-07c11b5b01a6/MasterData/WALs/jenkins-hbase20.apache.org,33229,1684947421566 2023-05-24 16:58:40,193 INFO [M:0;jenkins-hbase20:33229] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 16:58:40,193 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 16:58:40,194 INFO [M:0;jenkins-hbase20:33229] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33229 2023-05-24 16:58:40,196 DEBUG [M:0;jenkins-hbase20:33229] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33229,1684947421566 already deleted, retry=false 2023-05-24 16:58:40,253 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:58:40,253 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): regionserver:43397-0x1017e6761930001, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:58:40,253 INFO [RS:0;jenkins-hbase20:43397] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43397,1684947421602; zookeeper connection closed. 2023-05-24 16:58:40,253 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e1ce104] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e1ce104 2023-05-24 16:58:40,254 INFO [Listener at localhost.localdomain/40087] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 16:58:40,353 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:58:40,353 INFO [M:0;jenkins-hbase20:33229] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33229,1684947421566; zookeeper connection closed. 
2023-05-24 16:58:40,353 DEBUG [Listener at localhost.localdomain/40087-EventThread] zookeeper.ZKWatcher(600): master:33229-0x1017e6761930000, quorum=127.0.0.1:63859, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 16:58:40,354 WARN [Listener at localhost.localdomain/40087] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:58:40,360 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:58:40,472 WARN [BP-1108388780-148.251.75.209-1684947421095 heartbeating to localhost.localdomain/127.0.0.1:42999] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:58:40,472 WARN [BP-1108388780-148.251.75.209-1684947421095 heartbeating to localhost.localdomain/127.0.0.1:42999] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1108388780-148.251.75.209-1684947421095 (Datanode Uuid 308083e5-b77d-4d1a-8e0f-33cf352c941b) service to localhost.localdomain/127.0.0.1:42999 2023-05-24 16:58:40,474 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/dfs/data/data3/current/BP-1108388780-148.251.75.209-1684947421095] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:58:40,475 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/dfs/data/data4/current/BP-1108388780-148.251.75.209-1684947421095] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:58:40,478 WARN [Listener at localhost.localdomain/40087] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 16:58:40,482 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 16:58:40,588 WARN [BP-1108388780-148.251.75.209-1684947421095 heartbeating to localhost.localdomain/127.0.0.1:42999] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 16:58:40,588 WARN [BP-1108388780-148.251.75.209-1684947421095 heartbeating to localhost.localdomain/127.0.0.1:42999] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1108388780-148.251.75.209-1684947421095 (Datanode Uuid 23a7f3e4-6f92-4be5-b789-80176b3c09bb) service to localhost.localdomain/127.0.0.1:42999 2023-05-24 16:58:40,589 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/dfs/data/data1/current/BP-1108388780-148.251.75.209-1684947421095] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 16:58:40,590 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/cluster_99be783c-4e33-c3e1-7949-4bfc6a855d8c/dfs/data/data2/current/BP-1108388780-148.251.75.209-1684947421095] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-05-24 16:58:40,611 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 16:58:40,735 INFO [Listener at localhost.localdomain/40087] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 16:58:40,764 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 16:58:40,773 INFO [Listener at localhost.localdomain/40087] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=108 (was 95) - Thread LEAK? -, OpenFileDescriptor=530 (was 499) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=30 (was 70), ProcessCount=166 (was 169), AvailableMemoryMB=9429 (was 9846) 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=108, OpenFileDescriptor=530, MaxFileDescriptor=60000, SystemLoadAverage=30, ProcessCount=166, AvailableMemoryMB=9429 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/hadoop.log.dir so I do NOT create it in target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/00b71ee7-d3e0-88fc-65a1-781b2c55dc30/hadoop.tmp.dir so I do NOT create it in target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a, deleteOnExit=true 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/test.cache.data in system properties and HBase conf 2023-05-24 16:58:40,782 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/hadoop.log.dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 16:58:40,783 DEBUG [Listener at localhost.localdomain/40087] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 16:58:40,783 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/nfs.dump.dir in system properties and HBase conf 2023-05-24 16:58:40,784 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/java.io.tmpdir in system properties and HBase conf 2023-05-24 16:58:40,785 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 16:58:40,785 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 16:58:40,785 INFO [Listener at localhost.localdomain/40087] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 16:58:40,787 WARN [Listener at localhost.localdomain/40087] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-24 16:58:40,788 WARN [Listener at localhost.localdomain/40087] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:58:40,788 WARN [Listener at localhost.localdomain/40087] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:58:40,812 WARN [Listener at localhost.localdomain/40087] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:58:40,814 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:58:40,818 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/java.io.tmpdir/Jetty_localhost_localdomain_37261_hdfs____s5k7s3/webapp 2023-05-24 16:58:40,891 INFO [Listener at localhost.localdomain/40087] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37261 2023-05-24 16:58:40,893 WARN [Listener at localhost.localdomain/40087] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 16:58:40,894 WARN [Listener at localhost.localdomain/40087] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 16:58:40,894 WARN [Listener at localhost.localdomain/40087] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 16:58:40,916 WARN [Listener at localhost.localdomain/39627] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:58:40,925 WARN [Listener at localhost.localdomain/39627] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:58:40,927 WARN [Listener at localhost.localdomain/39627] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:58:40,928 INFO [Listener at localhost.localdomain/39627] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:58:40,932 INFO [Listener at localhost.localdomain/39627] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/java.io.tmpdir/Jetty_localhost_35539_datanode____4g5540/webapp 2023-05-24 16:58:41,002 INFO [Listener at localhost.localdomain/39627] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35539 2023-05-24 16:58:41,008 WARN [Listener at localhost.localdomain/32825] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:58:41,020 WARN [Listener at localhost.localdomain/32825] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 16:58:41,022 WARN [Listener at localhost.localdomain/32825] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 16:58:41,023 INFO [Listener at localhost.localdomain/32825] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 16:58:41,026 INFO [Listener at localhost.localdomain/32825] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/java.io.tmpdir/Jetty_localhost_38891_datanode____.sgod72/webapp 2023-05-24 16:58:41,058 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1947824f302ba535: Processing first storage report for DS-8c52f71e-4338-4483-b73f-c9e07391eaaa from datanode 32c68eb5-c275-42dd-ba52-4547d6c51c9d 2023-05-24 16:58:41,058 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1947824f302ba535: from storage DS-8c52f71e-4338-4483-b73f-c9e07391eaaa node DatanodeRegistration(127.0.0.1:39381, datanodeUuid=32c68eb5-c275-42dd-ba52-4547d6c51c9d, infoPort=46511, infoSecurePort=0, ipcPort=32825, storageInfo=lv=-57;cid=testClusterID;nsid=1853031965;c=1684947520790), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:58:41,058 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1947824f302ba535: Processing first storage report for DS-a884a5cf-0398-4121-9711-6d474fd29eff from datanode 32c68eb5-c275-42dd-ba52-4547d6c51c9d 2023-05-24 16:58:41,058 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1947824f302ba535: from storage DS-a884a5cf-0398-4121-9711-6d474fd29eff node DatanodeRegistration(127.0.0.1:39381, datanodeUuid=32c68eb5-c275-42dd-ba52-4547d6c51c9d, infoPort=46511, infoSecurePort=0, ipcPort=32825, storageInfo=lv=-57;cid=testClusterID;nsid=1853031965;c=1684947520790), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:58:41,100 INFO [Listener at localhost.localdomain/32825] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38891 2023-05-24 16:58:41,107 WARN [Listener at localhost.localdomain/37067] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 16:58:41,162 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x416e905f6b8694f8: Processing first storage report for DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec from datanode 38dd14bb-459c-4336-813f-ea3867da4661 2023-05-24 16:58:41,162 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x416e905f6b8694f8: from storage DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec node DatanodeRegistration(127.0.0.1:38943, datanodeUuid=38dd14bb-459c-4336-813f-ea3867da4661, infoPort=40069, infoSecurePort=0, ipcPort=37067, storageInfo=lv=-57;cid=testClusterID;nsid=1853031965;c=1684947520790), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:58:41,163 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x416e905f6b8694f8: Processing first storage report for DS-da6c4d37-c357-48e1-ad49-f40424d4a9b8 from datanode 38dd14bb-459c-4336-813f-ea3867da4661 2023-05-24 16:58:41,163 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x416e905f6b8694f8: from storage DS-da6c4d37-c357-48e1-ad49-f40424d4a9b8 node DatanodeRegistration(127.0.0.1:38943, 
datanodeUuid=38dd14bb-459c-4336-813f-ea3867da4661, infoPort=40069, infoSecurePort=0, ipcPort=37067, storageInfo=lv=-57;cid=testClusterID;nsid=1853031965;c=1684947520790), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 16:58:41,215 DEBUG [Listener at localhost.localdomain/37067] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1 2023-05-24 16:58:41,218 INFO [Listener at localhost.localdomain/37067] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/zookeeper_0, clientPort=55393, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 16:58:41,219 INFO [Listener at localhost.localdomain/37067] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55393 2023-05-24 16:58:41,220 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,221 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,238 INFO [Listener at localhost.localdomain/37067] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1 with version=8 2023-05-24 16:58:41,238 INFO [Listener at localhost.localdomain/37067] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:42025/user/jenkins/test-data/da9e8017-05b3-5024-58ae-9d4fab4e51db/hbase-staging 2023-05-24 16:58:41,240 INFO [Listener at localhost.localdomain/37067] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:58:41,240 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,240 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,240 INFO [Listener at localhost.localdomain/37067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:58:41,241 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,241 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:58:41,241 INFO [Listener at localhost.localdomain/37067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:58:41,243 INFO [Listener at localhost.localdomain/37067] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37517 2023-05-24 16:58:41,243 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,244 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,245 INFO [Listener at localhost.localdomain/37067] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37517 connecting to ZooKeeper ensemble=127.0.0.1:55393 2023-05-24 16:58:41,260 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:375170x0, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:58:41,261 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37517-0x1017e68e6ed0000 connected 2023-05-24 16:58:41,277 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:58:41,278 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:41,279 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:58:41,279 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37517 2023-05-24 16:58:41,280 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37517 2023-05-24 16:58:41,280 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37517 2023-05-24 16:58:41,281 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37517 2023-05-24 16:58:41,281 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37517 2023-05-24 16:58:41,281 INFO [Listener at localhost.localdomain/37067] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1, hbase.cluster.distributed=false 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 16:58:41,298 INFO [Listener at localhost.localdomain/37067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 16:58:41,301 INFO [Listener at localhost.localdomain/37067] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33003 2023-05-24 16:58:41,302 INFO [Listener at localhost.localdomain/37067] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 16:58:41,303 DEBUG [Listener at localhost.localdomain/37067] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 16:58:41,303 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,304 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,305 INFO [Listener at localhost.localdomain/37067] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33003 connecting to ZooKeeper ensemble=127.0.0.1:55393 2023-05-24 16:58:41,308 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:330030x0, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 16:58:41,309 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): regionserver:330030x0, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 16:58:41,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33003-0x1017e68e6ed0001 connected 2023-05-24 16:58:41,310 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): regionserver:33003-0x1017e68e6ed0001, 
quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:41,310 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ZKUtil(164): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 16:58:41,311 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33003 2023-05-24 16:58:41,311 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33003 2023-05-24 16:58:41,311 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33003 2023-05-24 16:58:41,314 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33003 2023-05-24 16:58:41,314 DEBUG [Listener at localhost.localdomain/37067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33003 2023-05-24 16:58:41,316 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,317 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:58:41,317 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,318 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:58:41,318 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 16:58:41,318 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,319 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:58:41,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 16:58:41,319 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,37517,1684947521239 from backup master directory 2023-05-24 16:58:41,320 DEBUG [Listener at localhost.localdomain/37067-EventThread] 
zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,320 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 16:58:41,320 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 16:58:41,320 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/hbase.id with ID: 0bffb039-e0e5-4131-a606-e4f64642e527 2023-05-24 16:58:41,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:41,352 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,360 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x11c99e09 to 127.0.0.1:55393 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:58:41,371 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42f1e67f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:58:41,372 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 16:58:41,372 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 16:58:41,373 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:58:41,375 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store-tmp 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 16:58:41,386 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:41,386 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:58:41,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/WALs/jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,389 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37517%2C1684947521239, suffix=, logDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/WALs/jenkins-hbase20.apache.org,37517,1684947521239, archiveDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/oldWALs, maxLogs=10 2023-05-24 16:58:41,393 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/WALs/jenkins-hbase20.apache.org,37517,1684947521239/jenkins-hbase20.apache.org%2C37517%2C1684947521239.1684947521389 2023-05-24 16:58:41,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39381,DS-8c52f71e-4338-4483-b73f-c9e07391eaaa,DISK], DatanodeInfoWithStorage[127.0.0.1:38943,DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec,DISK]] 2023-05-24 16:58:41,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:58:41,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:41,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,394 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,396 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,398 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 16:58:41,398 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 16:58:41,399 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,400 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,400 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,403 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 16:58:41,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:58:41,405 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=744160, jitterRate=-0.05375252664089203}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:58:41,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 16:58:41,406 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 16:58:41,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 16:58:41,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 16:58:41,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 16:58:41,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 16:58:41,408 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 16:58:41,408 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 16:58:41,409 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 16:58:41,409 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 16:58:41,421 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 16:58:41,421 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-24 16:58:41,422 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 16:58:41,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 16:58:41,422 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 16:58:41,423 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,424 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 16:58:41,424 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 16:58:41,424 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 16:58:41,425 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:58:41,425 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 16:58:41,425 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,425 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,37517,1684947521239, sessionid=0x1017e68e6ed0000, setting cluster-up flag (Was=false) 2023-05-24 16:58:41,428 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,430 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 16:58:41,431 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,433 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,435 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 16:58:41,436 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:41,436 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.hbase-snapshot/.tmp 2023-05-24 16:58:41,439 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:58:41,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,441 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684947551441 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 16:58:41,442 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 16:58:41,443 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:58:41,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 16:58:41,443 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 16:58:41,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 16:58:41,444 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 16:58:41,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947521445,5,FailOnTimeoutGroup] 2023-05-24 16:58:41,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947521445,5,FailOnTimeoutGroup] 2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,445 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,454 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:58:41,454 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 16:58:41,454 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1 2023-05-24 16:58:41,463 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:41,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:58:41,466 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/info 2023-05-24 16:58:41,466 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:58:41,466 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,467 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:58:41,468 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:58:41,468 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:58:41,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:58:41,469 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/table 2023-05-24 16:58:41,469 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:58:41,470 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,470 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740 2023-05-24 16:58:41,471 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740 2023-05-24 16:58:41,472 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:58:41,473 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:58:41,475 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721126, jitterRate=-0.08304207026958466}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:58:41,475 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:58:41,475 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:58:41,475 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:58:41,476 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 16:58:41,476 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 16:58:41,476 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 16:58:41,478 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 16:58:41,479 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 16:58:41,517 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(951): ClusterId : 0bffb039-e0e5-4131-a606-e4f64642e527 2023-05-24 16:58:41,518 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 16:58:41,521 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 16:58:41,521 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 16:58:41,523 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 16:58:41,524 DEBUG [RS:0;jenkins-hbase20:33003] zookeeper.ReadOnlyZKClient(139): Connect 0x3c414ddf to 127.0.0.1:55393 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:58:41,529 DEBUG [RS:0;jenkins-hbase20:33003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3286b427, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:58:41,529 DEBUG [RS:0;jenkins-hbase20:33003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36b926af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 16:58:41,541 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:33003 2023-05-24 16:58:41,541 INFO [RS:0;jenkins-hbase20:33003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 16:58:41,541 INFO [RS:0;jenkins-hbase20:33003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 16:58:41,541 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 16:58:41,541 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,37517,1684947521239 with isa=jenkins-hbase20.apache.org/148.251.75.209:33003, startcode=1684947521297 2023-05-24 16:58:41,542 DEBUG [RS:0;jenkins-hbase20:33003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 16:58:41,545 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55235, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 16:58:41,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37517] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,546 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1 2023-05-24 16:58:41,546 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39627 2023-05-24 16:58:41,546 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 16:58:41,547 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 16:58:41,548 DEBUG [RS:0;jenkins-hbase20:33003] zookeeper.ZKUtil(162): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,548 WARN [RS:0;jenkins-hbase20:33003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 16:58:41,548 INFO [RS:0;jenkins-hbase20:33003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:58:41,548 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,33003,1684947521297] 2023-05-24 16:58:41,548 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,552 DEBUG [RS:0;jenkins-hbase20:33003] zookeeper.ZKUtil(162): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,553 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 16:58:41,553 INFO [RS:0;jenkins-hbase20:33003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 16:58:41,554 INFO [RS:0;jenkins-hbase20:33003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 16:58:41,554 INFO [RS:0;jenkins-hbase20:33003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 16:58:41,554 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,555 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 16:58:41,556 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,557 DEBUG [RS:0;jenkins-hbase20:33003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 16:58:41,558 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,558 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,558 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,567 INFO [RS:0;jenkins-hbase20:33003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 16:58:41,567 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33003,1684947521297-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 16:58:41,576 INFO [RS:0;jenkins-hbase20:33003] regionserver.Replication(203): jenkins-hbase20.apache.org,33003,1684947521297 started 2023-05-24 16:58:41,576 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,33003,1684947521297, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:33003, sessionid=0x1017e68e6ed0001 2023-05-24 16:58:41,577 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 16:58:41,577 DEBUG [RS:0;jenkins-hbase20:33003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,577 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33003,1684947521297' 2023-05-24 16:58:41,577 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33003,1684947521297' 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 16:58:41,578 DEBUG [RS:0;jenkins-hbase20:33003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 16:58:41,579 INFO [RS:0;jenkins-hbase20:33003] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 16:58:41,579 INFO [RS:0;jenkins-hbase20:33003] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 16:58:41,629 DEBUG [jenkins-hbase20:37517] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 16:58:41,630 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33003,1684947521297, state=OPENING 2023-05-24 16:58:41,631 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 16:58:41,632 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:41,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33003,1684947521297}] 2023-05-24 16:58:41,634 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:58:41,682 INFO [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33003%2C1684947521297, suffix=, logDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297, archiveDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs, maxLogs=32 2023-05-24 16:58:41,696 INFO [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297/jenkins-hbase20.apache.org%2C33003%2C1684947521297.1684947521684 2023-05-24 16:58:41,696 DEBUG [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38943,DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec,DISK], DatanodeInfoWithStorage[127.0.0.1:39381,DS-8c52f71e-4338-4483-b73f-c9e07391eaaa,DISK]] 2023-05-24 16:58:41,790 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:41,790 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 16:58:41,796 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43732, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 16:58:41,798 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 16:58:41,799 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:58:41,800 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33003%2C1684947521297.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297, archiveDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs, maxLogs=32 2023-05-24 16:58:41,805 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297/jenkins-hbase20.apache.org%2C33003%2C1684947521297.meta.1684947521800.meta 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39381,DS-8c52f71e-4338-4483-b73f-c9e07391eaaa,DISK], DatanodeInfoWithStorage[127.0.0.1:38943,DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec,DISK]] 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 16:58:41,805 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 16:58:41,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:41,806 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 16:58:41,806 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 16:58:41,808 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 16:58:41,809 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/info 2023-05-24 16:58:41,809 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/info 2023-05-24 16:58:41,809 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 16:58:41,810 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,810 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 16:58:41,811 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:58:41,811 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/rep_barrier 2023-05-24 16:58:41,811 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 16:58:41,811 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,812 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 16:58:41,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/table 2023-05-24 16:58:41,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/table 2023-05-24 16:58:41,813 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 16:58:41,813 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:41,814 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740 2023-05-24 16:58:41,815 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740 2023-05-24 16:58:41,817 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 16:58:41,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 16:58:41,819 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=713644, jitterRate=-0.09255523979663849}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 16:58:41,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 16:58:41,821 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684947521789 2023-05-24 16:58:41,825 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 16:58:41,825 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 16:58:41,826 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33003,1684947521297, state=OPEN 2023-05-24 16:58:41,827 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 16:58:41,827 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 16:58:41,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 16:58:41,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33003,1684947521297 in 194 msec 2023-05-24 
16:58:41,831 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 16:58:41,831 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 354 msec 2023-05-24 16:58:41,833 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 395 msec 2023-05-24 16:58:41,833 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684947521833, completionTime=-1 2023-05-24 16:58:41,833 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 16:58:41,833 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 16:58:41,835 DEBUG [hconnection-0x69b06a18-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:58:41,837 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43736, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:58:41,838 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 16:58:41,838 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684947581838 2023-05-24 16:58:41,838 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684947641838 2023-05-24 16:58:41,838 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-24 16:58:41,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37517,1684947521239-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37517,1684947521239-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37517,1684947521239-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:37517, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 16:58:41,844 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
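The assignment records above leave hbase:meta OPEN on the single region server, with its location published for clients. Purely as an illustration of how that published location is consumed, and not something the test itself does, a client could resolve it through a RegionLocator; the connection boilerplate below is an assumption (it presupposes a Configuration pointing at this cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator meta = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Resolves the hbase:meta location that the master published for this cluster.
      HRegionLocation loc = meta.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}
```

Against this run it would report the jenkins-hbase20.apache.org,33003,1684947521297 server recorded above.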
2023-05-24 16:58:41,844 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 16:58:41,845 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 16:58:41,845 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 16:58:41,846 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 16:58:41,847 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 16:58:41,848 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.tmp/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:41,849 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.tmp/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8 empty. 2023-05-24 16:58:41,849 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.tmp/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:41,849 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 16:58:41,859 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 16:58:41,860 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cbba62827b7ef81f8404f52fe2a77ef8, NAME => 'hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/.tmp 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cbba62827b7ef81f8404f52fe2a77ef8, disabling compactions & flushes 2023-05-24 16:58:41,868 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. after waiting 0 ms 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:41,868 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:41,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cbba62827b7ef81f8404f52fe2a77ef8: 2023-05-24 16:58:41,870 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 16:58:41,871 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947521871"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684947521871"}]},"ts":"1684947521871"} 2023-05-24 16:58:41,873 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 16:58:41,874 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 16:58:41,874 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947521874"}]},"ts":"1684947521874"} 2023-05-24 16:58:41,875 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 16:58:41,879 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cbba62827b7ef81f8404f52fe2a77ef8, ASSIGN}] 2023-05-24 16:58:41,882 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cbba62827b7ef81f8404f52fe2a77ef8, ASSIGN 2023-05-24 16:58:41,882 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cbba62827b7ef81f8404f52fe2a77ef8, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33003,1684947521297; forceNewPlan=false, retain=false 2023-05-24 16:58:41,905 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:58:42,034 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cbba62827b7ef81f8404f52fe2a77ef8, regionState=OPENING, 
regionLocation=jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:42,034 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947522033"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684947522033"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684947522033"}]},"ts":"1684947522033"} 2023-05-24 16:58:42,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure cbba62827b7ef81f8404f52fe2a77ef8, server=jenkins-hbase20.apache.org,33003,1684947521297}] 2023-05-24 16:58:42,194 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,194 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cbba62827b7ef81f8404f52fe2a77ef8, NAME => 'hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.', STARTKEY => '', ENDKEY => ''} 2023-05-24 16:58:42,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 16:58:42,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,198 INFO [StoreOpener-cbba62827b7ef81f8404f52fe2a77ef8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,201 DEBUG [StoreOpener-cbba62827b7ef81f8404f52fe2a77ef8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/info 2023-05-24 16:58:42,201 DEBUG [StoreOpener-cbba62827b7ef81f8404f52fe2a77ef8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/info 2023-05-24 16:58:42,202 INFO [StoreOpener-cbba62827b7ef81f8404f52fe2a77ef8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cbba62827b7ef81f8404f52fe2a77ef8 columnFamilyName info 2023-05-24 16:58:42,203 INFO [StoreOpener-cbba62827b7ef81f8404f52fe2a77ef8-1] regionserver.HStore(310): Store=cbba62827b7ef81f8404f52fe2a77ef8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 16:58:42,204 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,205 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,210 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 16:58:42,213 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened cbba62827b7ef81f8404f52fe2a77ef8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=714124, jitterRate=-0.09194479882717133}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 16:58:42,214 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for cbba62827b7ef81f8404f52fe2a77ef8: 2023-05-24 16:58:42,216 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8., pid=6, masterSystemTime=1684947522188 2023-05-24 16:58:42,219 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,219 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 
2023-05-24 16:58:42,220 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cbba62827b7ef81f8404f52fe2a77ef8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:42,220 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684947522220"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684947522220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684947522220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684947522220"}]},"ts":"1684947522220"} 2023-05-24 16:58:42,226 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 16:58:42,226 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure cbba62827b7ef81f8404f52fe2a77ef8, server=jenkins-hbase20.apache.org,33003,1684947521297 in 188 msec 2023-05-24 16:58:42,229 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 16:58:42,229 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cbba62827b7ef81f8404f52fe2a77ef8, ASSIGN in 348 msec 2023-05-24 16:58:42,230 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 16:58:42,231 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684947522231"}]},"ts":"1684947522231"} 2023-05-24 16:58:42,233 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 16:58:42,236 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 16:58:42,238 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 392 msec 2023-05-24 16:58:42,246 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 16:58:42,253 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:58:42,254 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:42,261 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 16:58:42,271 DEBUG [Listener at localhost.localdomain/37067-EventThread] 
zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:58:42,274 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-05-24 16:58:42,284 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 16:58:42,291 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 16:58:42,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-24 16:58:42,308 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 16:58:42,310 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 16:58:42,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.990sec 2023-05-24 16:58:42,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 16:58:42,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 16:58:42,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 16:58:42,310 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37517,1684947521239-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 16:58:42,311 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37517,1684947521239-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
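The CreateTableProcedure records above build 'hbase:namespace' with a single 'info' family: IN_MEMORY => 'true', VERSIONS => '10', BLOOMFILTER => 'ROW', BLOCKSIZE => '8192'. A minimal sketch of declaring an equivalent descriptor through the HBase 2.x client API follows; the table name and the connection setup are illustrative assumptions, since the namespace table itself is created internally by the master rather than by client code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the column-family attributes logged for hbase:namespace above:
      // one 'info' family, in-memory, 10 versions, ROW bloom filter, 8 KB blocks.
      TableDescriptorBuilder table =
          TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_namespace_like")); // hypothetical name
      table.setColumnFamily(
          ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setInMemory(true)
              .setMaxVersions(10)
              .setBloomFilterType(BloomType.ROW)
              .setBlocksize(8 * 1024)
              .build());
      admin.createTable(table.build());
    }
  }
}
```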
2023-05-24 16:58:42,313 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 16:58:42,317 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ReadOnlyZKClient(139): Connect 0x26b9558d to 127.0.0.1:55393 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 16:58:42,323 DEBUG [Listener at localhost.localdomain/37067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@731e2d00, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 16:58:42,326 DEBUG [hconnection-0x2102442a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 16:58:42,328 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43742, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 16:58:42,330 INFO [Listener at localhost.localdomain/37067] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:42,330 INFO [Listener at localhost.localdomain/37067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 16:58:42,337 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 16:58:42,337 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:42,338 INFO [Listener at localhost.localdomain/37067] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 16:58:42,339 INFO [Listener at localhost.localdomain/37067] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 16:58:42,341 INFO [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs, maxLogs=32 2023-05-24 16:58:42,350 INFO [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1/test.com%2C8080%2C1.1684947522342 2023-05-24 16:58:42,350 DEBUG [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39381,DS-8c52f71e-4338-4483-b73f-c9e07391eaaa,DISK], DatanodeInfoWithStorage[127.0.0.1:38943,DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec,DISK]] 2023-05-24 16:58:42,358 INFO [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1/test.com%2C8080%2C1.1684947522342 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1/test.com%2C8080%2C1.1684947522350 2023-05-24 16:58:42,358 DEBUG [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38943,DS-078bfb99-0db8-40b5-9b69-7ddf34a42dec,DISK], DatanodeInfoWithStorage[127.0.0.1:39381,DS-8c52f71e-4338-4483-b73f-c9e07391eaaa,DISK]] 2023-05-24 16:58:42,359 DEBUG [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1/test.com%2C8080%2C1.1684947522342 is not closed yet, will try archiving it next time 2023-05-24 16:58:42,361 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1 2023-05-24 16:58:42,372 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/test.com,8080,1/test.com%2C8080%2C1.1684947522342 to hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs/test.com%2C8080%2C1.1684947522342 2023-05-24 16:58:42,375 DEBUG [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs 2023-05-24 16:58:42,375 INFO [Listener at localhost.localdomain/37067] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1684947522350) 2023-05-24 16:58:42,375 INFO [Listener at localhost.localdomain/37067] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 16:58:42,375 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26b9558d to 127.0.0.1:55393 2023-05-24 16:58:42,375 DEBUG [Listener at localhost.localdomain/37067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:42,379 DEBUG [Listener at localhost.localdomain/37067] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 16:58:42,379 DEBUG [Listener at localhost.localdomain/37067] util.JVMClusterUtil(257): Found active master hash=696825864, stopped=false 2023-05-24 16:58:42,379 INFO [Listener at localhost.localdomain/37067] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,37517,1684947521239 2023-05-24 16:58:42,380 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 16:58:42,380 INFO [Listener at localhost.localdomain/37067] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 16:58:42,380 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 16:58:42,380 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 
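The WAL records above show the test's FSHLog for prefix test.com%2C8080%2C1 being created, rolled while still empty (entries=0, filesize=83 B), archived to oldWALs, and closed. A rough, hedged sketch of driving the same create/roll/close sequence through the public WAL API is below; the region, the factory id, and the configuration handling are assumptions rather than the test's actual code:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;

public class WalRollSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The factory id ends up as the WAL file prefix, like "test.com,8080,1" in the log above.
    WALFactory wals = new WALFactory(conf, "test.com,8080,1");
    try {
      RegionInfo region = RegionInfoBuilder.newBuilder(TableName.valueOf("demo")).build(); // hypothetical region
      WAL wal = wals.getWAL(region); // obtains the WAL; the provider creates the backing file under .../WALs
      wal.rollWriter();              // requests a roll; a fully closed previous file becomes eligible for archiving
    } finally {
      wals.close();                  // shuts down the provider and closes the current writer
    }
  }
}
```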
2023-05-24 16:58:42,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:42,381 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 16:58:42,381 DEBUG [Listener at localhost.localdomain/37067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11c99e09 to 127.0.0.1:55393 2023-05-24 16:58:42,381 DEBUG [Listener at localhost.localdomain/37067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:42,382 INFO [Listener at localhost.localdomain/37067] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,33003,1684947521297' ***** 2023-05-24 16:58:42,382 INFO [Listener at localhost.localdomain/37067] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 16:58:42,382 INFO [RS:0;jenkins-hbase20:33003] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 16:58:42,382 INFO [RS:0;jenkins-hbase20:33003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 16:58:42,382 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 16:58:42,382 INFO [RS:0;jenkins-hbase20:33003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 16:58:42,383 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(3303): Received CLOSE for cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,386 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:42,386 DEBUG [RS:0;jenkins-hbase20:33003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c414ddf to 127.0.0.1:55393 2023-05-24 16:58:42,387 DEBUG [RS:0;jenkins-hbase20:33003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:42,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing cbba62827b7ef81f8404f52fe2a77ef8, disabling compactions & flushes 2023-05-24 16:58:42,387 INFO [RS:0;jenkins-hbase20:33003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 16:58:42,387 INFO [RS:0;jenkins-hbase20:33003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 16:58:42,387 INFO [RS:0;jenkins-hbase20:33003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 16:58:42,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,387 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 16:58:42,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 
after waiting 0 ms 2023-05-24 16:58:42,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing cbba62827b7ef81f8404f52fe2a77ef8 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 16:58:42,387 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-24 16:58:42,387 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1478): Online Regions={cbba62827b7ef81f8404f52fe2a77ef8=hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8., 1588230740=hbase:meta,,1.1588230740} 2023-05-24 16:58:42,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 16:58:42,388 DEBUG [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1504): Waiting on 1588230740, cbba62827b7ef81f8404f52fe2a77ef8 2023-05-24 16:58:42,388 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 16:58:42,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 16:58:42,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 16:58:42,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 16:58:42,388 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-24 16:58:42,397 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/.tmp/info/45ad6a17826e489cab79277947c02e4b 2023-05-24 16:58:42,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/.tmp/info/f14c8c2ec714446ab1d41bf0f184c24c 2023-05-24 16:58:42,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/.tmp/info/f14c8c2ec714446ab1d41bf0f184c24c as hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/info/f14c8c2ec714446ab1d41bf0f184c24c 2023-05-24 16:58:42,416 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/.tmp/table/095ffd9731ea48cd82edd01cd588f542 2023-05-24 
16:58:42,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/info/f14c8c2ec714446ab1d41bf0f184c24c, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 16:58:42,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for cbba62827b7ef81f8404f52fe2a77ef8 in 30ms, sequenceid=6, compaction requested=false 2023-05-24 16:58:42,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/namespace/cbba62827b7ef81f8404f52fe2a77ef8/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 16:58:42,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for cbba62827b7ef81f8404f52fe2a77ef8: 2023-05-24 16:58:42,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684947521844.cbba62827b7ef81f8404f52fe2a77ef8. 2023-05-24 16:58:42,424 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/.tmp/info/45ad6a17826e489cab79277947c02e4b as hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/info/45ad6a17826e489cab79277947c02e4b 2023-05-24 16:58:42,429 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/info/45ad6a17826e489cab79277947c02e4b, entries=10, sequenceid=9, filesize=5.9 K 2023-05-24 16:58:42,430 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/.tmp/table/095ffd9731ea48cd82edd01cd588f542 as hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/table/095ffd9731ea48cd82edd01cd588f542 2023-05-24 16:58:42,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/table/095ffd9731ea48cd82edd01cd588f542, entries=2, sequenceid=9, filesize=4.7 K 2023-05-24 16:58:42,436 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 48ms, sequenceid=9, compaction requested=false 2023-05-24 16:58:42,441 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-24 16:58:42,442 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 16:58:42,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 16:58:42,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 16:58:42,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 16:58:42,566 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 16:58:42,566 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 16:58:42,588 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33003,1684947521297; all regions closed. 2023-05-24 16:58:42,588 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:42,594 DEBUG [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs 2023-05-24 16:58:42,594 INFO [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C33003%2C1684947521297.meta:.meta(num 1684947521800) 2023-05-24 16:58:42,595 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/WALs/jenkins-hbase20.apache.org,33003,1684947521297 2023-05-24 16:58:42,601 DEBUG [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/oldWALs 2023-05-24 16:58:42,601 INFO [RS:0;jenkins-hbase20:33003] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C33003%2C1684947521297:(num 1684947521684) 2023-05-24 16:58:42,601 DEBUG [RS:0;jenkins-hbase20:33003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 16:58:42,601 INFO [RS:0;jenkins-hbase20:33003] regionserver.LeaseManager(133): Closed leases 2023-05-24 16:58:42,602 INFO [RS:0;jenkins-hbase20:33003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 16:58:42,602 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 16:58:42,602 INFO [RS:0;jenkins-hbase20:33003] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33003
2023-05-24 16:58:42,604 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-24 16:58:42,604 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33003,1684947521297
2023-05-24 16:58:42,604 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-24 16:58:42,605 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,33003,1684947521297]
2023-05-24 16:58:42,605 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,33003,1684947521297; numProcessing=1
2023-05-24 16:58:42,605 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,33003,1684947521297 already deleted, retry=false
2023-05-24 16:58:42,605 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,33003,1684947521297 expired; onlineServers=0
2023-05-24 16:58:42,605 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,37517,1684947521239' *****
2023-05-24 16:58:42,605 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-24 16:58:42,605 DEBUG [M:0;jenkins-hbase20:37517] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db729e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0
2023-05-24 16:58:42,605 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37517,1684947521239
2023-05-24 16:58:42,605 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37517,1684947521239; all regions closed.
2023-05-24 16:58:42,605 DEBUG [M:0;jenkins-hbase20:37517] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-24 16:58:42,606 DEBUG [M:0;jenkins-hbase20:37517] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-24 16:58:42,606 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-24 16:58:42,606 DEBUG [M:0;jenkins-hbase20:37517] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-24 16:58:42,606 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947521445] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684947521445,5,FailOnTimeoutGroup]
2023-05-24 16:58:42,606 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947521445] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684947521445,5,FailOnTimeoutGroup]
2023-05-24 16:58:42,606 INFO [M:0;jenkins-hbase20:37517] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-24 16:58:42,607 INFO [M:0;jenkins-hbase20:37517] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-24 16:58:42,607 INFO [M:0;jenkins-hbase20:37517] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown
2023-05-24 16:58:42,607 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-24 16:58:42,607 DEBUG [M:0;jenkins-hbase20:37517] master.HMaster(1512): Stopping service threads
2023-05-24 16:58:42,607 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-24 16:58:42,607 INFO [M:0;jenkins-hbase20:37517] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-24 16:58:42,607 ERROR [M:0;jenkins-hbase20:37517] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-24 16:58:42,608 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-24 16:58:42,608 INFO [M:0;jenkins-hbase20:37517] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-24 16:58:42,608 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-24 16:58:42,608 DEBUG [M:0;jenkins-hbase20:37517] zookeeper.ZKUtil(398): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-24 16:58:42,608 WARN [M:0;jenkins-hbase20:37517] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-24 16:58:42,608 INFO [M:0;jenkins-hbase20:37517] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-24 16:58:42,608 INFO [M:0;jenkins-hbase20:37517] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-24 16:58:42,609 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-24 16:58:42,609 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 16:58:42,609 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 16:58:42,609 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-24 16:58:42,609 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 16:58:42,609 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB
2023-05-24 16:58:42,616 INFO [M:0;jenkins-hbase20:37517] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cac24394a4e24a4dba142f9d4ea6d3d7
2023-05-24 16:58:42,620 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cac24394a4e24a4dba142f9d4ea6d3d7 as hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cac24394a4e24a4dba142f9d4ea6d3d7
2023-05-24 16:58:42,624 INFO [M:0;jenkins-hbase20:37517] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39627/user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cac24394a4e24a4dba142f9d4ea6d3d7, entries=8, sequenceid=66, filesize=6.3 K
2023-05-24 16:58:42,625 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 16ms, sequenceid=66, compaction requested=false
2023-05-24 16:58:42,626 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 16:58:42,626 DEBUG [M:0;jenkins-hbase20:37517] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-24 16:58:42,626 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/209f72ec-2d85-47f8-cdf5-017eb7c8a7e1/MasterData/WALs/jenkins-hbase20.apache.org,37517,1684947521239
2023-05-24 16:58:42,630 INFO [M:0;jenkins-hbase20:37517] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-24 16:58:42,630 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-24 16:58:42,630 INFO [M:0;jenkins-hbase20:37517] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37517
2023-05-24 16:58:42,632 DEBUG [M:0;jenkins-hbase20:37517] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,37517,1684947521239 already deleted, retry=false
2023-05-24 16:58:42,778 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-24 16:58:42,778 INFO [M:0;jenkins-hbase20:37517] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37517,1684947521239; zookeeper connection closed.
2023-05-24 16:58:42,779 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): master:37517-0x1017e68e6ed0000, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-24 16:58:42,879 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-24 16:58:42,879 DEBUG [Listener at localhost.localdomain/37067-EventThread] zookeeper.ZKWatcher(600): regionserver:33003-0x1017e68e6ed0001, quorum=127.0.0.1:55393, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-24 16:58:42,879 INFO [RS:0;jenkins-hbase20:33003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33003,1684947521297; zookeeper connection closed.
2023-05-24 16:58:42,879 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@62fd6630] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@62fd6630
2023-05-24 16:58:42,880 INFO [Listener at localhost.localdomain/37067] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-24 16:58:42,880 WARN [Listener at localhost.localdomain/37067] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-24 16:58:42,884 INFO [Listener at localhost.localdomain/37067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-24 16:58:42,988 WARN [BP-1712843542-148.251.75.209-1684947520790 heartbeating to localhost.localdomain/127.0.0.1:39627] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-24 16:58:42,988 WARN [BP-1712843542-148.251.75.209-1684947520790 heartbeating to localhost.localdomain/127.0.0.1:39627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1712843542-148.251.75.209-1684947520790 (Datanode Uuid 38dd14bb-459c-4336-813f-ea3867da4661) service to localhost.localdomain/127.0.0.1:39627
2023-05-24 16:58:42,989 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/dfs/data/data3/current/BP-1712843542-148.251.75.209-1684947520790] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-24 16:58:42,989 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/dfs/data/data4/current/BP-1712843542-148.251.75.209-1684947520790] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-24 16:58:42,990 WARN [Listener at localhost.localdomain/37067] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-24 16:58:42,993 INFO [Listener at localhost.localdomain/37067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-24 16:58:43,056 WARN [BP-1712843542-148.251.75.209-1684947520790 heartbeating to localhost.localdomain/127.0.0.1:39627] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1712843542-148.251.75.209-1684947520790 (Datanode Uuid 32c68eb5-c275-42dd-ba52-4547d6c51c9d) service to localhost.localdomain/127.0.0.1:39627
2023-05-24 16:58:43,057 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/dfs/data/data1/current/BP-1712843542-148.251.75.209-1684947520790] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-24 16:58:43,057 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/24db4238-7f44-9dcd-d4a2-1c27ac44b9f1/cluster_739aaac3-ca72-8174-1033-dd56ddd7710a/dfs/data/data2/current/BP-1712843542-148.251.75.209-1684947520790] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-24 16:58:43,103 INFO [Listener at localhost.localdomain/37067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-24 16:58:43,212 INFO [Listener at localhost.localdomain/37067] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-24 16:58:43,224 INFO [Listener at localhost.localdomain/37067] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-24 16:58:43,233 INFO [Listener at localhost.localdomain/37067] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=132 (was 108) - Thread LEAK? -, OpenFileDescriptor=558 (was 530) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=30 (was 30), ProcessCount=166 (was 166), AvailableMemoryMB=9422 (was 9429)
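Editor's note: the shutdown sequence above (regions flushed and closed, WALs archived to oldWALs, master and region server stopped, then the DataNodes and MiniZK cluster torn down) is what HBaseTestingUtility#shutdownMiniCluster produces at the end of a test. The following is a minimal sketch, not the actual TestLogRolling source; the class name MiniClusterLifecycleSketch is hypothetical, while HBaseTestingUtility, StartMiniClusterOption, startMiniCluster and shutdownMiniCluster are the HBase test utilities named in this log.

// A minimal sketch of the mini-cluster lifecycle recorded in this log,
// assuming the HBase 2.x test utilities on the classpath.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testUtil = new HBaseTestingUtility();
    // Mirrors the StartMiniClusterOption printed at startup:
    // 1 master, 1 region server, 2 DataNodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    testUtil.startMiniCluster(option);
    try {
      // ... test body, e.g. write edits and roll the WAL ...
    } finally {
      // Triggers the teardown logged above and ends with
      // "Minicluster is down" once HBase, HDFS and ZooKeeper have stopped.
      testUtil.shutdownMiniCluster();
    }
  }
}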