2023-05-30 19:55:24,406 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1
2023-05-30 19:55:24,419 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-05-30 19:55:24,452 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=219, ProcessCount=168, AvailableMemoryMB=4392
2023-05-30 19:55:24,459 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-30 19:55:24,459 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302, deleteOnExit=true
2023-05-30 19:55:24,459 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-30 19:55:24,460 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/test.cache.data in system properties and HBase conf
2023-05-30 19:55:24,460 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/hadoop.tmp.dir in system properties and HBase conf
2023-05-30 19:55:24,461 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/hadoop.log.dir in system properties and HBase conf
2023-05-30 19:55:24,461 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-30 19:55:24,462 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-30 19:55:24,462 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-30 19:55:24,576 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-05-30 19:55:24,978 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-30 19:55:24,982 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-30 19:55:24,982 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-30 19:55:24,982 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-30 19:55:24,983 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-30 19:55:24,983 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-30 19:55:24,983 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-30 19:55:24,983 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-30 19:55:24,984 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-30 19:55:24,984 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-30 19:55:24,984 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/nfs.dump.dir in system properties and HBase conf
2023-05-30 19:55:24,984 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/java.io.tmpdir in system properties and HBase conf
2023-05-30 19:55:24,985 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-30 19:55:24,985 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-30 19:55:24,985 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-30 19:55:25,425 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-30 19:55:25,439 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-30 19:55:25,444 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-30 19:55:25,731 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-05-30 19:55:25,882 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-05-30 19:55:25,896 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-30 19:55:25,932 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-05-30 19:55:25,964 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/java.io.tmpdir/Jetty_localhost_45555_hdfs____p1tnha/webapp
2023-05-30 19:55:26,096 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45555
2023-05-30 19:55:26,105 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-30 19:55:26,115 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-30 19:55:26,116 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-30 19:55:26,584 WARN [Listener at localhost/43381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-30 19:55:26,649 WARN [Listener at localhost/43381] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-30 19:55:26,666 WARN [Listener at localhost/43381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-30 19:55:26,672 INFO [Listener at localhost/43381] log.Slf4jLog(67): jetty-6.1.26
2023-05-30 19:55:26,676 INFO [Listener at localhost/43381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/java.io.tmpdir/Jetty_localhost_42715_datanode____.3wmtks/webapp
2023-05-30 19:55:26,778 INFO [Listener at localhost/43381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42715
2023-05-30 19:55:27,076 WARN [Listener at localhost/33307] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-30 19:55:27,091 WARN [Listener at localhost/33307] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-30 19:55:27,095 WARN [Listener at localhost/33307] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-30 19:55:27,097 INFO [Listener at localhost/33307] log.Slf4jLog(67): jetty-6.1.26
2023-05-30 19:55:27,107 INFO [Listener at localhost/33307] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/java.io.tmpdir/Jetty_localhost_38047_datanode____.24x88b/webapp
2023-05-30 19:55:27,214 INFO [Listener at localhost/33307] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38047
2023-05-30 19:55:27,222 WARN [Listener at localhost/46567] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-30 19:55:27,520 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1211a4c90873f4a4: Processing first storage report for DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b from datanode 801bad2e-f13a-4493-a396-871c4cdcb881
2023-05-30 19:55:27,522 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1211a4c90873f4a4: from storage DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b node DatanodeRegistration(127.0.0.1:34581, datanodeUuid=801bad2e-f13a-4493-a396-871c4cdcb881, infoPort=40547, infoSecurePort=0, ipcPort=33307, storageInfo=lv=-57;cid=testClusterID;nsid=489305691;c=1685476525516), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-05-30 19:55:27,522 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7a8e8ee1791e58f4: Processing first storage report for DS-e7982d92-3046-4f02-92b5-ca1e9d84b495 from datanode 7b7a27f3-8739-4c5d-a410-b36edb3706e7
2023-05-30 19:55:27,522 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7a8e8ee1791e58f4: from storage DS-e7982d92-3046-4f02-92b5-ca1e9d84b495 node DatanodeRegistration(127.0.0.1:39201, datanodeUuid=7b7a27f3-8739-4c5d-a410-b36edb3706e7, infoPort=46243, infoSecurePort=0, ipcPort=46567, storageInfo=lv=-57;cid=testClusterID;nsid=489305691;c=1685476525516), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-30 19:55:27,522 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1211a4c90873f4a4: Processing first storage report for DS-aeea81aa-da35-4640-9ddd-d58da77b5f3d from datanode 801bad2e-f13a-4493-a396-871c4cdcb881
2023-05-30 19:55:27,522 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1211a4c90873f4a4: from storage DS-aeea81aa-da35-4640-9ddd-d58da77b5f3d node DatanodeRegistration(127.0.0.1:34581, datanodeUuid=801bad2e-f13a-4493-a396-871c4cdcb881, infoPort=40547, infoSecurePort=0, ipcPort=33307, storageInfo=lv=-57;cid=testClusterID;nsid=489305691;c=1685476525516), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-30 19:55:27,523 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7a8e8ee1791e58f4: Processing first storage report for DS-aefed681-a2c0-40de-bfa0-64f581674170 from datanode 7b7a27f3-8739-4c5d-a410-b36edb3706e7
2023-05-30 19:55:27,523 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7a8e8ee1791e58f4: from storage DS-aefed681-a2c0-40de-bfa0-64f581674170 node DatanodeRegistration(127.0.0.1:39201, datanodeUuid=7b7a27f3-8739-4c5d-a410-b36edb3706e7, infoPort=46243, infoSecurePort=0, ipcPort=46567, storageInfo=lv=-57;cid=testClusterID;nsid=489305691;c=1685476525516), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-30 19:55:27,613 DEBUG [Listener at localhost/46567] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1
2023-05-30 19:55:27,674 INFO [Listener at localhost/46567] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/zookeeper_0, clientPort=64488, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-30 19:55:27,691 INFO [Listener at localhost/46567] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64488
2023-05-30 19:55:27,702 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:27,704 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:28,368 INFO [Listener at localhost/46567] util.FSUtils(471): Created version file at hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f with version=8
2023-05-30 19:55:28,368 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging
2023-05-30 19:55:28,694 INFO [Listener at localhost/46567] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-05-30 19:55:29,189 INFO [Listener at localhost/46567] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-05-30 19:55:29,220 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,221 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,221 INFO [Listener at localhost/46567] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-30 19:55:29,221 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,221 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-30 19:55:29,359 INFO [Listener at localhost/46567] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-30 19:55:29,429 DEBUG [Listener at localhost/46567] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-05-30 19:55:29,525 INFO [Listener at localhost/46567] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33435
2023-05-30 19:55:29,535 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:29,536 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:29,556 INFO [Listener at localhost/46567] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33435 connecting to ZooKeeper ensemble=127.0.0.1:64488
2023-05-30 19:55:29,596 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:334350x0, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-30 19:55:29,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33435-0x1007da9512e0000 connected
2023-05-30 19:55:29,620 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-30 19:55:29,621 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-30 19:55:29,624 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-30 19:55:29,632 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33435
2023-05-30 19:55:29,633 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33435
2023-05-30 19:55:29,633 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33435
2023-05-30 19:55:29,634 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33435
2023-05-30 19:55:29,634 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33435
2023-05-30 19:55:29,640 INFO [Listener at localhost/46567] master.HMaster(444): hbase.rootdir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f, hbase.cluster.distributed=false
2023-05-30 19:55:29,704 INFO [Listener at localhost/46567] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-05-30 19:55:29,704 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,705 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,705 INFO [Listener at localhost/46567] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-30 19:55:29,705 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-30 19:55:29,705 INFO [Listener at localhost/46567] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-30 19:55:29,710 INFO [Listener at localhost/46567] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-30 19:55:29,712 INFO [Listener at localhost/46567] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37959
2023-05-30 19:55:29,714 INFO [Listener at localhost/46567] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-30 19:55:29,720 DEBUG [Listener at localhost/46567] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-30 19:55:29,721 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:29,723 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:29,725 INFO [Listener at localhost/46567] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37959 connecting to ZooKeeper ensemble=127.0.0.1:64488
2023-05-30 19:55:29,728 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:379590x0, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-30 19:55:29,729 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37959-0x1007da9512e0001 connected
2023-05-30 19:55:29,729 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-30 19:55:29,730 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-30 19:55:29,731 DEBUG [Listener at localhost/46567] zookeeper.ZKUtil(164): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-30 19:55:29,732 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37959
2023-05-30 19:55:29,732 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37959
2023-05-30 19:55:29,732 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37959
2023-05-30 19:55:29,733 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37959
2023-05-30 19:55:29,733 DEBUG [Listener at localhost/46567] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37959
2023-05-30 19:55:29,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:29,743 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-30 19:55:29,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:29,762 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-30 19:55:29,762 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-30 19:55:29,762 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:29,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-30 19:55:29,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-30 19:55:29,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33435,1685476528514 from backup master directory
2023-05-30 19:55:29,773 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:29,773 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-30 19:55:29,773 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-30 19:55:29,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:29,776 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-05-30 19:55:29,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-05-30 19:55:29,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase.id with ID: 47d1898d-4c4d-4956-aef7-f5f0453b3eb3
2023-05-30 19:55:29,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-30 19:55:29,916 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:29,957 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5a52d554 to 127.0.0.1:64488 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-30 19:55:29,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@717c0bc4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-30 19:55:30,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-30 19:55:30,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-05-30 19:55:30,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-30 19:55:30,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store-tmp
2023-05-30 19:55:30,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-30 19:55:30,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-30 19:55:30,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 19:55:30,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 19:55:30,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-30 19:55:30,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 19:55:30,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 19:55:30,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-30 19:55:30,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/WALs/jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:30,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33435%2C1685476528514, suffix=, logDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/WALs/jenkins-hbase4.apache.org,33435,1685476528514, archiveDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/oldWALs, maxLogs=10
2023-05-30 19:55:30,121 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-30 19:55:30,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/WALs/jenkins-hbase4.apache.org,33435,1685476528514/jenkins-hbase4.apache.org%2C33435%2C1685476528514.1685476530119
2023-05-30 19:55:30,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]]
2023-05-30 19:55:30,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-30 19:55:30,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-30 19:55:30,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,200 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,207 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-05-30 19:55:30,232 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-05-30 19:55:30,244 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-30 19:55:30,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-05-30 19:55:30,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-30 19:55:30,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=820124, jitterRate=0.04284191131591797}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-30 19:55:30,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-30 19:55:30,276 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-05-30 19:55:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-05-30 19:55:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-05-30 19:55:30,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-05-30 19:55:30,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 4 msec
2023-05-30 19:55:30,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec
2023-05-30 19:55:30,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-05-30 19:55:30,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-05-30 19:55:30,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-05-30 19:55:30,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-05-30 19:55:30,411 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-05-30 19:55:30,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-05-30 19:55:30,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-05-30 19:55:30,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-05-30 19:55:30,425 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:30,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-05-30 19:55:30,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-05-30 19:55:30,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-05-30 19:55:30,443 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-30 19:55:30,443 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-30 19:55:30,443 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:30,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33435,1685476528514, sessionid=0x1007da9512e0000, setting cluster-up flag (Was=false)
2023-05-30 19:55:30,458 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:30,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-05-30 19:55:30,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:30,469 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 19:55:30,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-05-30 19:55:30,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33435,1685476528514
2023-05-30 19:55:30,479 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.hbase-snapshot/.tmp
2023-05-30 19:55:30,537 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(951): ClusterId : 47d1898d-4c4d-4956-aef7-f5f0453b3eb3
2023-05-30 19:55:30,542 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-05-30 19:55:30,550 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-05-30 19:55:30,550 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-05-30 19:55:30,554 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-05-30 19:55:30,555 DEBUG [RS:0;jenkins-hbase4:37959] zookeeper.ReadOnlyZKClient(139): Connect 0x6c04b001 to 127.0.0.1:64488 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-30 19:55:30,560 DEBUG [RS:0;jenkins-hbase4:37959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62c353f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-30 19:55:30,561 DEBUG [RS:0;jenkins-hbase4:37959] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66fcdb71, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-05-30 19:55:30,596 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37959
2023-05-30 19:55:30,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-05-30 19:55:30,601 INFO [RS:0;jenkins-hbase4:37959] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-05-30 19:55:30,601 INFO [RS:0;jenkins-hbase4:37959] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-05-30 19:55:30,601 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1022): About to register with Master.
2023-05-30 19:55:30,605 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,33435,1685476528514 with isa=jenkins-hbase4.apache.org/172.31.14.131:37959, startcode=1685476529703
2023-05-30 19:55:30,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-30 19:55:30,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-30 19:55:30,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-30 19:55:30,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-05-30 19:55:30,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-05-30 19:55:30,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-05-30 19:55:30,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-05-30 19:55:30,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-05-30 19:55:30,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476560619
2023-05-30 19:55:30,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-05-30 19:55:30,625 DEBUG [RS:0;jenkins-hbase4:37959] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-05-30 19:55:30,626 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-05-30 19:55:30,626 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-05-30 19:55:30,632 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-30 19:55:30,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-05-30 19:55:30,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-05-30 19:55:30,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-05-30 19:55:30,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-05-30 19:55:30,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-05-30 19:55:30,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-30 19:55:30,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-05-30 19:55:30,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-05-30 19:55:30,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-05-30 19:55:30,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-05-30 19:55:30,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-05-30 19:55:30,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476530657,5,FailOnTimeoutGroup]
2023-05-30 19:55:30,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476530658,5,FailOnTimeoutGroup]
2023-05-30 19:55:30,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-30 19:55:30,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-05-30 19:55:30,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-05-30 19:55:30,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-05-30 19:55:30,704 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:55:30,706 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:55:30,706 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f 2023-05-30 19:55:30,745 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:30,748 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:55:30,752 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/info 2023-05-30 19:55:30,753 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:55:30,755 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:30,755 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:55:30,758 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:55:30,759 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:55:30,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:30,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:55:30,762 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/table 2023-05-30 19:55:30,763 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:55:30,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:30,765 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740 2023-05-30 19:55:30,766 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740 2023-05-30 19:55:30,771 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:55:30,775 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:55:30,781 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59933, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:55:30,783 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:55:30,784 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=760056, jitterRate=-0.033539190888404846}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:55:30,785 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:55:30,785 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:55:30,785 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:55:30,785 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:55:30,785 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:55:30,785 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:55:30,786 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:55:30,787 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:55:30,792 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:55:30,792 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 19:55:30,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 19:55:30,818 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 19:55:30,819 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f 2023-05-30 19:55:30,819 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43381 2023-05-30 19:55:30,819 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:55:30,825 INFO 
[PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 19:55:30,827 DEBUG [RS:0;jenkins-hbase4:37959] zookeeper.ZKUtil(162): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,828 WARN [RS:0;jenkins-hbase4:37959] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-30 19:55:30,828 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:55:30,829 INFO [RS:0;jenkins-hbase4:37959] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:55:30,829 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,832 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37959,1685476529703] 2023-05-30 19:55:30,841 DEBUG [RS:0;jenkins-hbase4:37959] zookeeper.ZKUtil(162): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,851 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:55:30,859 INFO [RS:0;jenkins-hbase4:37959] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:55:30,877 INFO [RS:0;jenkins-hbase4:37959] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:55:30,881 INFO [RS:0;jenkins-hbase4:37959] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:55:30,881 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:30,882 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:55:30,888 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 19:55:30,888 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,889 DEBUG [RS:0;jenkins-hbase4:37959] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:55:30,890 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:30,890 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:30,890 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:30,906 INFO [RS:0;jenkins-hbase4:37959] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:55:30,908 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37959,1685476529703-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 19:55:30,926 INFO [RS:0;jenkins-hbase4:37959] regionserver.Replication(203): jenkins-hbase4.apache.org,37959,1685476529703 started 2023-05-30 19:55:30,926 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37959,1685476529703, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37959, sessionid=0x1007da9512e0001 2023-05-30 19:55:30,926 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:55:30,926 DEBUG [RS:0;jenkins-hbase4:37959] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,927 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37959,1685476529703' 2023-05-30 19:55:30,927 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:55:30,927 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:55:30,928 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:55:30,928 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:55:30,928 DEBUG [RS:0;jenkins-hbase4:37959] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:30,928 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37959,1685476529703' 2023-05-30 19:55:30,928 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:55:30,929 DEBUG [RS:0;jenkins-hbase4:37959] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:55:30,929 DEBUG [RS:0;jenkins-hbase4:37959] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:55:30,930 INFO [RS:0;jenkins-hbase4:37959] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 19:55:30,930 INFO [RS:0;jenkins-hbase4:37959] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-30 19:55:30,977 DEBUG [jenkins-hbase4:33435] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 19:55:30,979 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37959,1685476529703, state=OPENING 2023-05-30 19:55:30,986 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 19:55:30,988 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:55:30,988 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:55:30,991 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37959,1685476529703}] 2023-05-30 19:55:31,040 INFO [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37959%2C1685476529703, suffix=, logDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703, archiveDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/oldWALs, maxLogs=32 2023-05-30 19:55:31,053 INFO [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476531042 2023-05-30 19:55:31,053 DEBUG [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:55:31,174 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:31,176 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 19:55:31,180 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42092, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 19:55:31,192 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 19:55:31,193 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:55:31,196 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37959%2C1685476529703.meta, suffix=.meta, logDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703, archiveDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/oldWALs, maxLogs=32 2023-05-30 19:55:31,210 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.meta.1685476531198.meta 2023-05-30 19:55:31,210 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:55:31,210 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:55:31,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 19:55:31,228 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 19:55:31,233 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 19:55:31,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 19:55:31,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:31,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 19:55:31,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 19:55:31,241 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:55:31,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/info 2023-05-30 19:55:31,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/info 2023-05-30 19:55:31,243 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:55:31,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:31,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:55:31,246 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:55:31,246 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:55:31,246 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:55:31,247 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:31,247 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:55:31,248 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/table 2023-05-30 19:55:31,248 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/table 2023-05-30 19:55:31,249 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:55:31,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:31,251 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740 2023-05-30 19:55:31,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740 2023-05-30 19:55:31,257 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:55:31,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:55:31,261 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=780805, jitterRate=-0.007156014442443848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:55:31,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:55:31,271 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476531166 2023-05-30 19:55:31,289 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 19:55:31,289 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 19:55:31,290 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37959,1685476529703, state=OPEN 2023-05-30 19:55:31,292 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 19:55:31,292 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:55:31,297 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 19:55:31,297 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37959,1685476529703 in 301 msec 2023-05-30 19:55:31,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 19:55:31,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 497 msec 2023-05-30 19:55:31,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 787 msec 2023-05-30 19:55:31,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476531308, completionTime=-1 2023-05-30 19:55:31,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 19:55:31,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-30 19:55:31,371 DEBUG [hconnection-0x2d0e4886-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:55:31,373 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:55:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 19:55:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476591390 2023-05-30 19:55:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685476651390 2023-05-30 19:55:31,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 80 msec 2023-05-30 19:55:31,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33435,1685476528514-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:31,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33435,1685476528514-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:31,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33435,1685476528514-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:31,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33435, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:31,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 19:55:31,420 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 19:55:31,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 19:55:31,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:55:31,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 19:55:31,442 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:55:31,444 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:55:31,466 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,468 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd empty. 2023-05-30 19:55:31,468 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,469 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 19:55:31,539 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 19:55:31,541 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c2cecb239c2d8bcaf59c9c3762fe96cd, NAME => 'hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp 2023-05-30 19:55:31,555 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:31,556 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c2cecb239c2d8bcaf59c9c3762fe96cd, disabling compactions & flushes 2023-05-30 19:55:31,556 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 
2023-05-30 19:55:31,556 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:55:31,556 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. after waiting 0 ms 2023-05-30 19:55:31,556 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:55:31,556 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:55:31,556 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c2cecb239c2d8bcaf59c9c3762fe96cd: 2023-05-30 19:55:31,560 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:55:31,575 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476531563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476531563"}]},"ts":"1685476531563"} 2023-05-30 19:55:31,599 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:55:31,601 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:55:31,605 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476531601"}]},"ts":"1685476531601"} 2023-05-30 19:55:31,610 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 19:55:31,618 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2cecb239c2d8bcaf59c9c3762fe96cd, ASSIGN}] 2023-05-30 19:55:31,621 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2cecb239c2d8bcaf59c9c3762fe96cd, ASSIGN 2023-05-30 19:55:31,622 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c2cecb239c2d8bcaf59c9c3762fe96cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37959,1685476529703; forceNewPlan=false, retain=false 2023-05-30 19:55:31,774 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2cecb239c2d8bcaf59c9c3762fe96cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:31,774 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476531773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476531773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476531773"}]},"ts":"1685476531773"} 2023-05-30 19:55:31,778 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c2cecb239c2d8bcaf59c9c3762fe96cd, server=jenkins-hbase4.apache.org,37959,1685476529703}] 2023-05-30 19:55:31,939 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:55:31,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2cecb239c2d8bcaf59c9c3762fe96cd, NAME => 'hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:55:31,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:31,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,944 INFO [StoreOpener-c2cecb239c2d8bcaf59c9c3762fe96cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,946 DEBUG [StoreOpener-c2cecb239c2d8bcaf59c9c3762fe96cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/info 2023-05-30 19:55:31,946 DEBUG [StoreOpener-c2cecb239c2d8bcaf59c9c3762fe96cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/info 2023-05-30 19:55:31,947 INFO [StoreOpener-c2cecb239c2d8bcaf59c9c3762fe96cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2cecb239c2d8bcaf59c9c3762fe96cd columnFamilyName info 2023-05-30 19:55:31,947 INFO [StoreOpener-c2cecb239c2d8bcaf59c9c3762fe96cd-1] regionserver.HStore(310): Store=c2cecb239c2d8bcaf59c9c3762fe96cd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:31,949 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,950 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,954 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:55:31,957 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:55:31,958 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2cecb239c2d8bcaf59c9c3762fe96cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872664, jitterRate=0.10965074598789215}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:55:31,958 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2cecb239c2d8bcaf59c9c3762fe96cd: 2023-05-30 19:55:31,960 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd., pid=6, masterSystemTime=1685476531932 2023-05-30 19:55:31,964 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:55:31,964 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 
2023-05-30 19:55:31,966 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2cecb239c2d8bcaf59c9c3762fe96cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:31,966 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476531965"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476531965"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476531965"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476531965"}]},"ts":"1685476531965"} 2023-05-30 19:55:31,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 19:55:31,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c2cecb239c2d8bcaf59c9c3762fe96cd, server=jenkins-hbase4.apache.org,37959,1685476529703 in 192 msec 2023-05-30 19:55:31,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 19:55:31,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c2cecb239c2d8bcaf59c9c3762fe96cd, ASSIGN in 356 msec 2023-05-30 19:55:31,979 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:55:31,979 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476531979"}]},"ts":"1685476531979"} 2023-05-30 19:55:31,982 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 19:55:31,985 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:55:31,988 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 554 msec 2023-05-30 19:55:32,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 19:55:32,044 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:55:32,044 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:55:32,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 19:55:32,099 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): 
master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:55:32,107 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-05-30 19:55:32,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 19:55:32,129 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:55:32,134 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-05-30 19:55:32,143 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 19:55:32,145 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 19:55:32,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.371sec 2023-05-30 19:55:32,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 19:55:32,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 19:55:32,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 19:55:32,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33435,1685476528514-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 19:55:32,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33435,1685476528514-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 19:55:32,161 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 19:55:32,242 DEBUG [Listener at localhost/46567] zookeeper.ReadOnlyZKClient(139): Connect 0x340cb75f to 127.0.0.1:64488 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:55:32,247 DEBUG [Listener at localhost/46567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@760814a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:55:32,259 DEBUG [hconnection-0x442d94d1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:55:32,270 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42110, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:55:32,280 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33435,1685476528514 2023-05-30 19:55:32,280 INFO [Listener at localhost/46567] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:55:32,287 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 19:55:32,287 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:55:32,288 INFO [Listener at localhost/46567] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 19:55:32,296 DEBUG [Listener at localhost/46567] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-30 19:55:32,300 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58910, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-30 19:55:32,308 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-30 19:55:32,308 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-30 19:55:32,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:55:32,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-30 19:55:32,316 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:55:32,319 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:55:32,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-30 19:55:32,323 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,324 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565 empty. 
2023-05-30 19:55:32,326 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,326 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-30 19:55:32,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:55:32,348 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-30 19:55:32,350 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 108d4cff8c1abd305009a12207f42565, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/.tmp 2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 108d4cff8c1abd305009a12207f42565, disabling compactions & flushes 2023-05-30 19:55:32,365 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. after waiting 0 ms 2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:32,365 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 
2023-05-30 19:55:32,365 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:55:32,369 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:55:32,371 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685476532371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476532371"}]},"ts":"1685476532371"} 2023-05-30 19:55:32,374 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:55:32,376 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:55:32,376 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476532376"}]},"ts":"1685476532376"} 2023-05-30 19:55:32,378 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-30 19:55:32,383 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=108d4cff8c1abd305009a12207f42565, ASSIGN}] 2023-05-30 19:55:32,385 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=108d4cff8c1abd305009a12207f42565, ASSIGN 2023-05-30 19:55:32,386 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=108d4cff8c1abd305009a12207f42565, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37959,1685476529703; forceNewPlan=false, retain=false 2023-05-30 19:55:32,537 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=108d4cff8c1abd305009a12207f42565, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:32,538 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685476532537"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476532537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476532537"}]},"ts":"1685476532537"} 2023-05-30 19:55:32,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 108d4cff8c1abd305009a12207f42565, server=jenkins-hbase4.apache.org,37959,1685476529703}] 2023-05-30 19:55:32,700 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:32,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 108d4cff8c1abd305009a12207f42565, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:55:32,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:55:32,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,703 INFO [StoreOpener-108d4cff8c1abd305009a12207f42565-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,705 DEBUG [StoreOpener-108d4cff8c1abd305009a12207f42565-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info 2023-05-30 19:55:32,705 DEBUG [StoreOpener-108d4cff8c1abd305009a12207f42565-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info 2023-05-30 19:55:32,706 INFO [StoreOpener-108d4cff8c1abd305009a12207f42565-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 108d4cff8c1abd305009a12207f42565 columnFamilyName info 2023-05-30 19:55:32,707 INFO [StoreOpener-108d4cff8c1abd305009a12207f42565-1] regionserver.HStore(310): Store=108d4cff8c1abd305009a12207f42565/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:55:32,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:32,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:55:32,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 108d4cff8c1abd305009a12207f42565; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=821272, jitterRate=0.0443020761013031}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:55:32,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:55:32,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565., pid=11, masterSystemTime=1685476532694 2023-05-30 19:55:32,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:32,724 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 
2023-05-30 19:55:32,724 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=108d4cff8c1abd305009a12207f42565, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:55:32,725 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685476532724"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476532724"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476532724"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476532724"}]},"ts":"1685476532724"} 2023-05-30 19:55:32,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-30 19:55:32,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 108d4cff8c1abd305009a12207f42565, server=jenkins-hbase4.apache.org,37959,1685476529703 in 188 msec 2023-05-30 19:55:32,736 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-30 19:55:32,737 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=108d4cff8c1abd305009a12207f42565, ASSIGN in 349 msec 2023-05-30 19:55:32,738 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:55:32,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476532738"}]},"ts":"1685476532738"} 2023-05-30 19:55:32,740 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-30 19:55:32,743 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:55:32,746 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 431 msec 2023-05-30 19:55:36,747 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-30 19:55:36,856 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-30 19:55:36,858 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-30 19:55:36,858 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-30 19:55:38,690 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-30 19:55:38,691 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-30 19:55:42,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33435] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:55:42,341 INFO [Listener at localhost/46567] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-30 19:55:42,345 DEBUG [Listener at localhost/46567] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-30 19:55:42,346 DEBUG [Listener at localhost/46567] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:55:54,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37959] regionserver.HRegion(9158): Flush requested on 108d4cff8c1abd305009a12207f42565 2023-05-30 19:55:54,373 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 108d4cff8c1abd305009a12207f42565 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:55:54,442 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/a8f6b66619004baeac3c378bfe9e578a 2023-05-30 19:55:54,490 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/a8f6b66619004baeac3c378bfe9e578a as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a 2023-05-30 19:55:54,500 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a, entries=7, sequenceid=11, filesize=12.1 K 2023-05-30 19:55:54,502 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 108d4cff8c1abd305009a12207f42565 in 129ms, sequenceid=11, compaction requested=false 2023-05-30 19:55:54,503 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:02,584 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:04,788 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:06,990 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:09,193 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:09,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37959] regionserver.HRegion(9158): Flush requested on 108d4cff8c1abd305009a12207f42565 2023-05-30 19:56:09,194 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 108d4cff8c1abd305009a12207f42565 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:56:09,395 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:09,413 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/e3caa7f77b084e97899e33810f816efd 2023-05-30 19:56:09,423 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/e3caa7f77b084e97899e33810f816efd as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd 2023-05-30 19:56:09,432 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd, entries=7, sequenceid=21, filesize=12.1 K 2023-05-30 19:56:09,633 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:09,634 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 108d4cff8c1abd305009a12207f42565 in 439ms, sequenceid=21, compaction requested=false 2023-05-30 19:56:09,634 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:09,634 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-30 19:56:09,634 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:56:09,636 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a 
because midkey is the same as first or last row 2023-05-30 19:56:11,397 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:13,599 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:13,601 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37959%2C1685476529703:(num 1685476531042) roll requested 2023-05-30 19:56:13,601 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:13,813 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK], DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK]] 2023-05-30 19:56:13,815 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476531042 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476573601 2023-05-30 19:56:13,815 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:13,816 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476531042 is not closed yet, will try archiving it next time 2023-05-30 19:56:23,613 INFO [Listener at localhost/46567] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-30 19:56:28,616 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:28,616 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:28,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37959] regionserver.HRegion(9158): Flush requested on 108d4cff8c1abd305009a12207f42565 2023-05-30 19:56:28,616 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C37959%2C1685476529703:(num 1685476573601) roll requested 2023-05-30 19:56:28,616 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 108d4cff8c1abd305009a12207f42565 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:56:30,617 INFO [Listener at localhost/46567] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-30 19:56:33,617 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:33,618 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:33,630 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:33,630 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:33,631 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476573601 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476588616 2023-05-30 19:56:33,631 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39201,DS-e7982d92-3046-4f02-92b5-ca1e9d84b495,DISK], DatanodeInfoWithStorage[127.0.0.1:34581,DS-3fbcf394-cfa4-4fde-9ec7-f62e430cb43b,DISK]] 2023-05-30 19:56:33,631 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476573601 is not closed yet, will try archiving it next time 2023-05-30 19:56:33,638 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/8297bc1679174e9f8a0628365301c81c 2023-05-30 19:56:33,647 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/8297bc1679174e9f8a0628365301c81c as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c 2023-05-30 19:56:33,655 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c, entries=7, sequenceid=31, filesize=12.1 K 2023-05-30 19:56:33,659 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 108d4cff8c1abd305009a12207f42565 in 5043ms, sequenceid=31, compaction requested=true 2023-05-30 19:56:33,659 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:33,659 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-30 19:56:33,659 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:56:33,659 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a because midkey is the same as first or last row 2023-05-30 19:56:33,661 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 19:56:33,661 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 19:56:33,665 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 19:56:33,667 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HStore(1912): 108d4cff8c1abd305009a12207f42565/info is initiating minor compaction (all files) 2023-05-30 19:56:33,667 INFO [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 108d4cff8c1abd305009a12207f42565/info in TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 
2023-05-30 19:56:33,667 INFO [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c] into tmpdir=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp, totalSize=36.3 K 2023-05-30 19:56:33,668 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] compactions.Compactor(207): Compacting a8f6b66619004baeac3c378bfe9e578a, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685476542351 2023-05-30 19:56:33,669 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] compactions.Compactor(207): Compacting e3caa7f77b084e97899e33810f816efd, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685476556374 2023-05-30 19:56:33,670 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] compactions.Compactor(207): Compacting 8297bc1679174e9f8a0628365301c81c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685476571195 2023-05-30 19:56:33,694 INFO [RS:0;jenkins-hbase4:37959-shortCompactions-0] throttle.PressureAwareThroughputController(145): 108d4cff8c1abd305009a12207f42565#info#compaction#3 average throughput is 21.55 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 19:56:33,713 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/e08a6bb777e64051b3ebc92734463185 as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e08a6bb777e64051b3ebc92734463185 2023-05-30 19:56:33,729 INFO [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 108d4cff8c1abd305009a12207f42565/info of 108d4cff8c1abd305009a12207f42565 into e08a6bb777e64051b3ebc92734463185(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 19:56:33,730 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:33,730 INFO [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565., storeName=108d4cff8c1abd305009a12207f42565/info, priority=13, startTime=1685476593661; duration=0sec 2023-05-30 19:56:33,731 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-30 19:56:33,731 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:56:33,731 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e08a6bb777e64051b3ebc92734463185 because midkey is the same as first or last row 2023-05-30 19:56:33,731 DEBUG [RS:0;jenkins-hbase4:37959-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 19:56:34,040 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476573601 to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/oldWALs/jenkins-hbase4.apache.org%2C37959%2C1685476529703.1685476573601 2023-05-30 19:56:45,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37959] regionserver.HRegion(9158): Flush requested on 108d4cff8c1abd305009a12207f42565 2023-05-30 19:56:45,738 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 108d4cff8c1abd305009a12207f42565 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:56:45,755 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/f0fd5d5148314f80bdb2ee24c98f7857 2023-05-30 19:56:45,763 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/f0fd5d5148314f80bdb2ee24c98f7857 as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/f0fd5d5148314f80bdb2ee24c98f7857 2023-05-30 19:56:45,770 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/f0fd5d5148314f80bdb2ee24c98f7857, entries=7, sequenceid=42, filesize=12.1 K 2023-05-30 19:56:45,772 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize 
~8.11 KB/8304, currentSize=0 B/0 for 108d4cff8c1abd305009a12207f42565 in 33ms, sequenceid=42, compaction requested=false 2023-05-30 19:56:45,772 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:45,772 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-30 19:56:45,772 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:56:45,772 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e08a6bb777e64051b3ebc92734463185 because midkey is the same as first or last row 2023-05-30 19:56:53,747 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 19:56:53,747 INFO [Listener at localhost/46567] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-30 19:56:53,748 DEBUG [Listener at localhost/46567] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x340cb75f to 127.0.0.1:64488 2023-05-30 19:56:53,748 DEBUG [Listener at localhost/46567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:56:53,749 DEBUG [Listener at localhost/46567] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 19:56:53,749 DEBUG [Listener at localhost/46567] util.JVMClusterUtil(257): Found active master hash=158975320, stopped=false 2023-05-30 19:56:53,749 INFO [Listener at localhost/46567] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33435,1685476528514 2023-05-30 19:56:53,751 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:56:53,751 INFO [Listener at localhost/46567] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 19:56:53,751 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:53,751 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:56:53,751 DEBUG [Listener at localhost/46567] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a52d554 to 127.0.0.1:64488 2023-05-30 19:56:53,752 DEBUG [Listener at localhost/46567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:56:53,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:56:53,752 INFO [Listener at localhost/46567] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37959,1685476529703' ***** 2023-05-30 19:56:53,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/running 2023-05-30 19:56:53,752 INFO [Listener at localhost/46567] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 19:56:53,752 INFO [RS:0;jenkins-hbase4:37959] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 19:56:53,753 INFO [RS:0;jenkins-hbase4:37959] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 19:56:53,753 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 19:56:53,753 INFO [RS:0;jenkins-hbase4:37959] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 19:56:53,753 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(3303): Received CLOSE for c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:56:53,754 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(3303): Received CLOSE for 108d4cff8c1abd305009a12207f42565 2023-05-30 19:56:53,754 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:56:53,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2cecb239c2d8bcaf59c9c3762fe96cd, disabling compactions & flushes 2023-05-30 19:56:53,755 DEBUG [RS:0;jenkins-hbase4:37959] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6c04b001 to 127.0.0.1:64488 2023-05-30 19:56:53,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:56:53,755 DEBUG [RS:0;jenkins-hbase4:37959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:56:53,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:56:53,755 INFO [RS:0;jenkins-hbase4:37959] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 19:56:53,755 INFO [RS:0;jenkins-hbase4:37959] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 19:56:53,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. after waiting 0 ms 2023-05-30 19:56:53,755 INFO [RS:0;jenkins-hbase4:37959] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 19:56:53,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 
2023-05-30 19:56:53,755 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 19:56:53,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c2cecb239c2d8bcaf59c9c3762fe96cd 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-30 19:56:53,755 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-30 19:56:53,756 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1478): Online Regions={c2cecb239c2d8bcaf59c9c3762fe96cd=hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd., 1588230740=hbase:meta,,1.1588230740, 108d4cff8c1abd305009a12207f42565=TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.} 2023-05-30 19:56:53,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:56:53,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:56:53,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:56:53,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:56:53,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:56:53,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-30 19:56:53,758 DEBUG [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1504): Waiting on 108d4cff8c1abd305009a12207f42565, 1588230740, c2cecb239c2d8bcaf59c9c3762fe96cd 2023-05-30 19:56:53,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/.tmp/info/2fa9b3368fa949c88a6816e308fbbd32 2023-05-30 19:56:53,780 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/.tmp/info/43417f75fa08456181597d3167c14e5c 2023-05-30 19:56:53,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/.tmp/info/2fa9b3368fa949c88a6816e308fbbd32 as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/info/2fa9b3368fa949c88a6816e308fbbd32 2023-05-30 19:56:53,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/info/2fa9b3368fa949c88a6816e308fbbd32, entries=2, sequenceid=6, filesize=4.8 K 2023-05-30 19:56:53,800 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c2cecb239c2d8bcaf59c9c3762fe96cd in 45ms, sequenceid=6, compaction requested=false 2023-05-30 19:56:53,801 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/.tmp/table/a886518dfcf2431f868070082c95f016 2023-05-30 19:56:53,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/namespace/c2cecb239c2d8bcaf59c9c3762fe96cd/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-30 19:56:53,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:56:53,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2cecb239c2d8bcaf59c9c3762fe96cd: 2023-05-30 19:56:53,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685476531429.c2cecb239c2d8bcaf59c9c3762fe96cd. 2023-05-30 19:56:53,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 108d4cff8c1abd305009a12207f42565, disabling compactions & flushes 2023-05-30 19:56:53,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:56:53,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:56:53,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. after waiting 0 ms 2023-05-30 19:56:53,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 
2023-05-30 19:56:53,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 108d4cff8c1abd305009a12207f42565 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-30 19:56:53,812 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/.tmp/info/43417f75fa08456181597d3167c14e5c as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/info/43417f75fa08456181597d3167c14e5c 2023-05-30 19:56:53,824 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/info/43417f75fa08456181597d3167c14e5c, entries=20, sequenceid=14, filesize=7.4 K 2023-05-30 19:56:53,827 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/.tmp/table/a886518dfcf2431f868070082c95f016 as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/table/a886518dfcf2431f868070082c95f016 2023-05-30 19:56:53,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/27105729e7434f6eae29a9251692309b 2023-05-30 19:56:53,835 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/table/a886518dfcf2431f868070082c95f016, entries=4, sequenceid=14, filesize=4.8 K 2023-05-30 19:56:53,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/.tmp/info/27105729e7434f6eae29a9251692309b as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/27105729e7434f6eae29a9251692309b 2023-05-30 19:56:53,837 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 80ms, sequenceid=14, compaction requested=false 2023-05-30 19:56:53,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/27105729e7434f6eae29a9251692309b, entries=3, sequenceid=48, filesize=7.9 K 2023-05-30 19:56:53,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 108d4cff8c1abd305009a12207f42565 in 39ms, sequenceid=48, compaction requested=true 2023-05-30 19:56:53,852 DEBUG 
[StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c] to archive 2023-05-30 19:56:53,854 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-30 19:56:53,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-30 19:56:53,857 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-30 19:56:53,858 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:56:53,858 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:56:53,859 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-30 19:56:53,860 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/a8f6b66619004baeac3c378bfe9e578a 2023-05-30 19:56:53,862 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/e3caa7f77b084e97899e33810f816efd 2023-05-30 19:56:53,864 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c to 
hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/archive/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/info/8297bc1679174e9f8a0628365301c81c 2023-05-30 19:56:53,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/data/default/TestLogRolling-testSlowSyncLogRolling/108d4cff8c1abd305009a12207f42565/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-30 19:56:53,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:56:53,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 108d4cff8c1abd305009a12207f42565: 2023-05-30 19:56:53,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685476532308.108d4cff8c1abd305009a12207f42565. 2023-05-30 19:56:53,918 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-30 19:56:53,919 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-30 19:56:53,958 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37959,1685476529703; all regions closed. 2023-05-30 19:56:53,959 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:56:53,966 DEBUG [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/oldWALs 2023-05-30 19:56:53,966 INFO [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37959%2C1685476529703.meta:.meta(num 1685476531198) 2023-05-30 19:56:53,966 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/WALs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:56:53,975 DEBUG [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/oldWALs 2023-05-30 19:56:53,975 INFO [RS:0;jenkins-hbase4:37959] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37959%2C1685476529703:(num 1685476588616) 2023-05-30 19:56:53,975 DEBUG [RS:0;jenkins-hbase4:37959] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:56:53,975 INFO [RS:0;jenkins-hbase4:37959] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:56:53,976 INFO [RS:0;jenkins-hbase4:37959] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-30 19:56:53,976 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-30 19:56:53,977 INFO [RS:0;jenkins-hbase4:37959] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37959 2023-05-30 19:56:53,983 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37959,1685476529703 2023-05-30 19:56:53,983 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:56:53,983 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:56:53,985 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37959,1685476529703] 2023-05-30 19:56:53,985 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37959,1685476529703; numProcessing=1 2023-05-30 19:56:53,986 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37959,1685476529703 already deleted, retry=false 2023-05-30 19:56:53,987 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37959,1685476529703 expired; onlineServers=0 2023-05-30 19:56:53,987 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33435,1685476528514' ***** 2023-05-30 19:56:53,987 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-30 19:56:53,987 DEBUG [M:0;jenkins-hbase4:33435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4844cf63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:56:53,987 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33435,1685476528514 2023-05-30 19:56:53,987 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33435,1685476528514; all regions closed. 2023-05-30 19:56:53,987 DEBUG [M:0;jenkins-hbase4:33435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:56:53,987 DEBUG [M:0;jenkins-hbase4:33435] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-30 19:56:53,988 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-30 19:56:53,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476530657] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476530657,5,FailOnTimeoutGroup] 2023-05-30 19:56:53,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476530658] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476530658,5,FailOnTimeoutGroup] 2023-05-30 19:56:53,988 DEBUG [M:0;jenkins-hbase4:33435] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-30 19:56:53,990 INFO [M:0;jenkins-hbase4:33435] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-30 19:56:53,990 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-30 19:56:53,990 INFO [M:0;jenkins-hbase4:33435] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-30 19:56:53,990 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:53,990 INFO [M:0;jenkins-hbase4:33435] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-30 19:56:53,991 DEBUG [M:0;jenkins-hbase4:33435] master.HMaster(1512): Stopping service threads 2023-05-30 19:56:53,991 INFO [M:0;jenkins-hbase4:33435] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-30 19:56:53,991 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:56:53,992 INFO [M:0;jenkins-hbase4:33435] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-30 19:56:53,992 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-30 19:56:53,993 DEBUG [M:0;jenkins-hbase4:33435] zookeeper.ZKUtil(398): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-30 19:56:53,993 WARN [M:0;jenkins-hbase4:33435] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-30 19:56:53,993 INFO [M:0;jenkins-hbase4:33435] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-30 19:56:53,993 INFO [M:0;jenkins-hbase4:33435] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-30 19:56:53,994 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:56:53,994 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-30 19:56:53,994 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:53,994 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:56:53,994 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:53,994 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.28 KB heapSize=46.71 KB 2023-05-30 19:56:54,010 INFO [M:0;jenkins-hbase4:33435] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.28 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/41f2cd514c12407a9ac1abd3caf45fa6 2023-05-30 19:56:54,016 INFO [M:0;jenkins-hbase4:33435] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41f2cd514c12407a9ac1abd3caf45fa6 2023-05-30 19:56:54,017 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/41f2cd514c12407a9ac1abd3caf45fa6 as hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/41f2cd514c12407a9ac1abd3caf45fa6 2023-05-30 19:56:54,024 INFO [M:0;jenkins-hbase4:33435] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 41f2cd514c12407a9ac1abd3caf45fa6 2023-05-30 19:56:54,024 INFO [M:0;jenkins-hbase4:33435] regionserver.HStore(1080): Added hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/41f2cd514c12407a9ac1abd3caf45fa6, entries=11, sequenceid=100, filesize=6.1 K 2023-05-30 19:56:54,025 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegion(2948): Finished flush of dataSize ~38.28 KB/39196, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=100, compaction requested=false 2023-05-30 19:56:54,026 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:54,027 DEBUG [M:0;jenkins-hbase4:33435] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:56:54,027 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/MasterData/WALs/jenkins-hbase4.apache.org,33435,1685476528514 2023-05-30 19:56:54,031 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 19:56:54,031 INFO [M:0;jenkins-hbase4:33435] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-05-30 19:56:54,032 INFO [M:0;jenkins-hbase4:33435] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33435 2023-05-30 19:56:54,035 DEBUG [M:0;jenkins-hbase4:33435] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33435,1685476528514 already deleted, retry=false 2023-05-30 19:56:54,085 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:56:54,085 INFO [RS:0;jenkins-hbase4:37959] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37959,1685476529703; zookeeper connection closed. 2023-05-30 19:56:54,085 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): regionserver:37959-0x1007da9512e0001, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:56:54,111 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7d08aee0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7d08aee0 2023-05-30 19:56:54,111 INFO [Listener at localhost/46567] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-30 19:56:54,185 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:56:54,185 INFO [M:0;jenkins-hbase4:33435] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33435,1685476528514; zookeeper connection closed. 2023-05-30 19:56:54,185 DEBUG [Listener at localhost/46567-EventThread] zookeeper.ZKWatcher(600): master:33435-0x1007da9512e0000, quorum=127.0.0.1:64488, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:56:54,187 WARN [Listener at localhost/46567] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:56:54,191 INFO [Listener at localhost/46567] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:56:54,296 WARN [BP-1041572678-172.31.14.131-1685476525516 heartbeating to localhost/127.0.0.1:43381] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:56:54,296 WARN [BP-1041572678-172.31.14.131-1685476525516 heartbeating to localhost/127.0.0.1:43381] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1041572678-172.31.14.131-1685476525516 (Datanode Uuid 7b7a27f3-8739-4c5d-a410-b36edb3706e7) service to localhost/127.0.0.1:43381 2023-05-30 19:56:54,297 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/dfs/data/data3/current/BP-1041572678-172.31.14.131-1685476525516] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:56:54,298 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/dfs/data/data4/current/BP-1041572678-172.31.14.131-1685476525516] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-05-30 19:56:54,299 WARN [Listener at localhost/46567] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:56:54,301 INFO [Listener at localhost/46567] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:56:54,404 WARN [BP-1041572678-172.31.14.131-1685476525516 heartbeating to localhost/127.0.0.1:43381] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:56:54,404 WARN [BP-1041572678-172.31.14.131-1685476525516 heartbeating to localhost/127.0.0.1:43381] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1041572678-172.31.14.131-1685476525516 (Datanode Uuid 801bad2e-f13a-4493-a396-871c4cdcb881) service to localhost/127.0.0.1:43381 2023-05-30 19:56:54,404 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/dfs/data/data1/current/BP-1041572678-172.31.14.131-1685476525516] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:56:54,405 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/cluster_3060b63b-5009-36be-27aa-032eed850302/dfs/data/data2/current/BP-1041572678-172.31.14.131-1685476525516] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:56:54,439 INFO [Listener at localhost/46567] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:56:54,551 INFO [Listener at localhost/46567] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-30 19:56:54,585 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-30 19:56:54,596 INFO [Listener at localhost/46567] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43381 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43381 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43381 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@7189a0c7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46567 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:43381 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43381 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) - Thread LEAK? -, OpenFileDescriptor=439 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=72 (was 219), ProcessCount=168 (was 168), AvailableMemoryMB=3647 (was 4392) 2023-05-30 19:56:54,604 INFO [Listener at localhost/46567] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=439, MaxFileDescriptor=60000, SystemLoadAverage=72, ProcessCount=168, AvailableMemoryMB=3647 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/hadoop.log.dir so I do NOT create it in target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/dd94c3ca-57d0-2eaa-530e-bd3c688459a1/hadoop.tmp.dir so I do NOT create it in target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324, deleteOnExit=true 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-30 19:56:54,605 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/test.cache.data in system properties and HBase conf 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/hadoop.tmp.dir in system properties and HBase conf 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting 
hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/hadoop.log.dir in system properties and HBase conf 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-30 19:56:54,606 DEBUG [Listener at localhost/46567] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-30 19:56:54,606 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-30 19:56:54,607 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/nfs.dump.dir in system properties and HBase conf 2023-05-30 19:56:54,608 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir in system properties and HBase conf 2023-05-30 19:56:54,608 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:56:54,608 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-30 19:56:54,608 INFO [Listener at localhost/46567] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-30 19:56:54,609 WARN [Listener at localhost/46567] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-30 19:56:54,612 WARN [Listener at localhost/46567] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:56:54,612 WARN [Listener at localhost/46567] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:56:54,655 WARN [Listener at localhost/46567] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:56:54,658 INFO [Listener at localhost/46567] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:56:54,662 INFO [Listener at localhost/46567] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_43687_hdfs____.7fuirk/webapp 2023-05-30 19:56:54,763 INFO [Listener at localhost/46567] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43687 2023-05-30 19:56:54,765 WARN [Listener at localhost/46567] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-30 19:56:54,768 WARN [Listener at localhost/46567] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:56:54,768 WARN [Listener at localhost/46567] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:56:54,814 WARN [Listener at localhost/43855] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:56:54,824 WARN [Listener at localhost/43855] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:56:54,826 WARN [Listener at localhost/43855] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:56:54,827 INFO [Listener at localhost/43855] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:56:54,831 INFO [Listener at localhost/43855] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_36941_datanode____.fvbxpi/webapp 2023-05-30 19:56:54,894 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:56:54,921 INFO [Listener at localhost/43855] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36941 2023-05-30 19:56:54,928 WARN [Listener at localhost/34745] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:56:54,945 WARN [Listener at localhost/34745] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:56:54,948 WARN [Listener at localhost/34745] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:56:54,949 INFO [Listener at localhost/34745] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:56:54,957 INFO [Listener at localhost/34745] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_40057_datanode____8ed48t/webapp 2023-05-30 19:56:55,048 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbcdd1c99835ce824: Processing first storage report for DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d from datanode 36d1d453-6cf5-4aa9-a417-575ff1a5a77c 2023-05-30 19:56:55,048 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbcdd1c99835ce824: from storage DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d node DatanodeRegistration(127.0.0.1:35199, datanodeUuid=36d1d453-6cf5-4aa9-a417-575ff1a5a77c, infoPort=38249, infoSecurePort=0, ipcPort=34745, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:56:55,049 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbcdd1c99835ce824: Processing first storage report for DS-6dab6602-b016-4789-8515-8893793a7ef0 from datanode 36d1d453-6cf5-4aa9-a417-575ff1a5a77c 2023-05-30 19:56:55,049 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbcdd1c99835ce824: from storage DS-6dab6602-b016-4789-8515-8893793a7ef0 node DatanodeRegistration(127.0.0.1:35199, datanodeUuid=36d1d453-6cf5-4aa9-a417-575ff1a5a77c, infoPort=38249, infoSecurePort=0, ipcPort=34745, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:56:55,066 INFO [Listener at localhost/34745] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40057 2023-05-30 19:56:55,074 WARN [Listener at localhost/42029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:56:55,169 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9bda689a93903b88: Processing first storage report for DS-65d236a7-1962-4084-b537-cf1def2f88d5 from datanode 7edf5cf5-1b20-44bf-abdc-84bc34baf02c 2023-05-30 19:56:55,169 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9bda689a93903b88: from storage DS-65d236a7-1962-4084-b537-cf1def2f88d5 node DatanodeRegistration(127.0.0.1:40687, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34615, infoSecurePort=0, ipcPort=42029, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:56:55,169 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9bda689a93903b88: Processing first storage report for DS-797c29f4-02fc-45d7-991b-f90cfaf39947 from datanode 7edf5cf5-1b20-44bf-abdc-84bc34baf02c 2023-05-30 19:56:55,169 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9bda689a93903b88: from storage DS-797c29f4-02fc-45d7-991b-f90cfaf39947 node DatanodeRegistration(127.0.0.1:40687, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34615, infoSecurePort=0, ipcPort=42029, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:56:55,185 DEBUG [Listener at localhost/42029] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53 2023-05-30 19:56:55,188 INFO [Listener at localhost/42029] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/zookeeper_0, clientPort=62840, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-30 19:56:55,189 INFO [Listener at localhost/42029] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62840 2023-05-30 19:56:55,189 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,190 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,206 INFO [Listener at localhost/42029] util.FSUtils(471): Created version file at hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a with version=8 2023-05-30 19:56:55,206 INFO [Listener at localhost/42029] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging 2023-05-30 19:56:55,208 INFO [Listener at localhost/42029] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:56:55,208 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,208 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,208 INFO [Listener at localhost/42029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:56:55,209 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,209 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 
19:56:55,209 INFO [Listener at localhost/42029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:56:55,210 INFO [Listener at localhost/42029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40009 2023-05-30 19:56:55,211 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,212 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,213 INFO [Listener at localhost/42029] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40009 connecting to ZooKeeper ensemble=127.0.0.1:62840 2023-05-30 19:56:55,223 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:400090x0, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:56:55,224 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40009-0x1007daaa7160000 connected 2023-05-30 19:56:55,240 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:56:55,241 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:56:55,241 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:56:55,242 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40009 2023-05-30 19:56:55,242 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40009 2023-05-30 19:56:55,242 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40009 2023-05-30 19:56:55,242 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40009 2023-05-30 19:56:55,243 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40009 2023-05-30 19:56:55,243 INFO [Listener at localhost/42029] master.HMaster(444): hbase.rootdir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a, hbase.cluster.distributed=false 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,257 INFO [Listener at 
localhost/42029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:56:55,257 INFO [Listener at localhost/42029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:56:55,258 INFO [Listener at localhost/42029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41207 2023-05-30 19:56:55,259 INFO [Listener at localhost/42029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 19:56:55,260 DEBUG [Listener at localhost/42029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 19:56:55,260 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,261 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,262 INFO [Listener at localhost/42029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41207 connecting to ZooKeeper ensemble=127.0.0.1:62840 2023-05-30 19:56:55,267 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:412070x0, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:56:55,268 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41207-0x1007daaa7160001 connected 2023-05-30 19:56:55,268 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:56:55,268 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:56:55,269 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:56:55,270 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41207 2023-05-30 19:56:55,272 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41207 2023-05-30 19:56:55,273 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41207 2023-05-30 19:56:55,274 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41207 2023-05-30 19:56:55,274 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41207 2023-05-30 19:56:55,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,277 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:56:55,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,278 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:56:55,278 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:56:55,278 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:56:55,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40009,1685476615207 from backup master directory 2023-05-30 19:56:55,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:56:55,283 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,283 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:56:55,283 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-30 19:56:55,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/hbase.id with ID: a24f6eac-da5c-4f23-b0c9-e2616350a305 2023-05-30 19:56:55,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:55,318 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0142a19f to 127.0.0.1:62840 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:56:55,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b22254b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:56:55,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:56:55,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-30 19:56:55,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:56:55,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store-tmp 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:56:55,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:55,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:56:55,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:56:55,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40009%2C1685476615207, suffix=, logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207, archiveDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/oldWALs, maxLogs=10 2023-05-30 19:56:55,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476615349 2023-05-30 19:56:55,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK], DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]] 2023-05-30 19:56:55,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:56:55,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:55,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,359 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,360 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-30 19:56:55,361 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-30 19:56:55,362 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,366 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:56:55,368 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:56:55,368 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=734332, jitterRate=-0.06624950468540192}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:56:55,368 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:56:55,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-30 19:56:55,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-30 19:56:55,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-30 19:56:55,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-30 19:56:55,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-30 19:56:55,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-30 19:56:55,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-30 19:56:55,374 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-30 19:56:55,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-30 19:56:55,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-30 19:56:55,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-30 19:56:55,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-30 19:56:55,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-30 19:56:55,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-30 19:56:55,394 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,395 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-30 19:56:55,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-30 19:56:55,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-30 19:56:55,399 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:56:55,399 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:56:55,400 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,401 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40009,1685476615207, sessionid=0x1007daaa7160000, setting cluster-up flag (Was=false) 2023-05-30 19:56:55,405 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-30 19:56:55,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,416 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 
19:56:55,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-30 19:56:55,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:55,424 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.hbase-snapshot/.tmp 2023-05-30 19:56:55,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:56:55,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476645430 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-30 19:56:55,430 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-30 19:56:55,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-30 19:56:55,431 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:56:55,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-30 19:56:55,431 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-30 19:56:55,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-30 19:56:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476615432,5,FailOnTimeoutGroup] 2023-05-30 19:56:55,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476615432,5,FailOnTimeoutGroup] 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-30 19:56:55,433 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:56:55,449 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:56:55,449 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:56:55,450 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a 2023-05-30 19:56:55,459 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:55,461 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:56:55,463 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/info 2023-05-30 19:56:55,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:56:55,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:56:55,465 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:56:55,465 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:56:55,466 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,466 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:56:55,468 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/table 2023-05-30 19:56:55,468 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:56:55,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,470 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740 2023-05-30 19:56:55,470 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740 2023-05-30 19:56:55,473 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:56:55,474 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:56:55,476 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(951): ClusterId : a24f6eac-da5c-4f23-b0c9-e2616350a305 2023-05-30 19:56:55,477 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 19:56:55,478 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:56:55,478 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=776441, jitterRate=-0.012705191969871521}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:56:55,479 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:56:55,479 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:56:55,479 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:56:55,481 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 19:56:55,481 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, 
locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:56:55,481 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 19:56:55,481 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 19:56:55,481 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 19:56:55,483 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 19:56:55,485 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 19:56:55,486 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 19:56:55,486 DEBUG [RS:0;jenkins-hbase4:41207] zookeeper.ReadOnlyZKClient(139): Connect 0x2b0dfbe4 to 127.0.0.1:62840 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:56:55,490 DEBUG [RS:0;jenkins-hbase4:41207] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25ca10f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:56:55,490 DEBUG [RS:0;jenkins-hbase4:41207] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21f59094, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:56:55,504 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41207 2023-05-30 19:56:55,504 INFO [RS:0;jenkins-hbase4:41207] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 19:56:55,504 INFO [RS:0;jenkins-hbase4:41207] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 19:56:55,504 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-30 19:56:55,505 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40009,1685476615207 with isa=jenkins-hbase4.apache.org/172.31.14.131:41207, startcode=1685476615256 2023-05-30 19:56:55,505 DEBUG [RS:0;jenkins-hbase4:41207] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 19:56:55,508 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36097, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:56:55,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,510 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a 2023-05-30 19:56:55,510 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43855 2023-05-30 19:56:55,510 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:56:55,512 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:56:55,512 DEBUG [RS:0;jenkins-hbase4:41207] zookeeper.ZKUtil(162): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,512 WARN [RS:0;jenkins-hbase4:41207] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-30 19:56:55,512 INFO [RS:0;jenkins-hbase4:41207] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:56:55,513 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,513 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41207,1685476615256] 2023-05-30 19:56:55,516 DEBUG [RS:0;jenkins-hbase4:41207] zookeeper.ZKUtil(162): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,517 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:56:55,518 INFO [RS:0;jenkins-hbase4:41207] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:56:55,520 INFO [RS:0;jenkins-hbase4:41207] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:56:55,521 INFO [RS:0;jenkins-hbase4:41207] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:56:55,521 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,522 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:56:55,524 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 19:56:55,524 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,524 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,524 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,525 DEBUG [RS:0;jenkins-hbase4:41207] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:55,527 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,527 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,527 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,539 INFO [RS:0;jenkins-hbase4:41207] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:56:55,539 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41207,1685476615256-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 19:56:55,556 INFO [RS:0;jenkins-hbase4:41207] regionserver.Replication(203): jenkins-hbase4.apache.org,41207,1685476615256 started 2023-05-30 19:56:55,556 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41207,1685476615256, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41207, sessionid=0x1007daaa7160001 2023-05-30 19:56:55,556 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:56:55,556 DEBUG [RS:0;jenkins-hbase4:41207] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,556 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41207,1685476615256' 2023-05-30 19:56:55,556 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41207,1685476615256' 2023-05-30 19:56:55,557 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:56:55,558 DEBUG [RS:0;jenkins-hbase4:41207] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:56:55,558 DEBUG [RS:0;jenkins-hbase4:41207] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:56:55,558 INFO [RS:0;jenkins-hbase4:41207] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 19:56:55,558 INFO [RS:0;jenkins-hbase4:41207] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-30 19:56:55,636 DEBUG [jenkins-hbase4:40009] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 19:56:55,637 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41207,1685476615256, state=OPENING 2023-05-30 19:56:55,639 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 19:56:55,641 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:55,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41207,1685476615256}] 2023-05-30 19:56:55,642 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:56:55,664 INFO [RS:0;jenkins-hbase4:41207] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41207%2C1685476615256, suffix=, logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256, archiveDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs, maxLogs=32 2023-05-30 19:56:55,676 INFO [RS:0;jenkins-hbase4:41207] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476615665 2023-05-30 19:56:55,677 DEBUG [RS:0;jenkins-hbase4:41207] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] 2023-05-30 19:56:55,798 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:55,798 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 19:56:55,801 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52942, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 19:56:55,815 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 19:56:55,815 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:56:55,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta, suffix=.meta, logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256, archiveDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs, maxLogs=32 2023-05-30 19:56:55,837 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476615822.meta 2023-05-30 19:56:55,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] 2023-05-30 19:56:55,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:56:55,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 19:56:55,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 19:56:55,838 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 19:56:55,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 19:56:55,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:55,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 19:56:55,838 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 19:56:55,840 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:56:55,841 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/info 2023-05-30 19:56:55,841 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/info 2023-05-30 19:56:55,842 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:56:55,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:56:55,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:56:55,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:56:55,844 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:56:55,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:56:55,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/table 2023-05-30 19:56:55,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740/table 2023-05-30 19:56:55,847 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:56:55,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:55,849 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740 2023-05-30 19:56:55,850 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/meta/1588230740 2023-05-30 19:56:55,852 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:56:55,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:56:55,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=868186, jitterRate=0.10395601391792297}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:56:55,855 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:56:55,857 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476615798 2023-05-30 19:56:55,860 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 19:56:55,860 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 19:56:55,861 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41207,1685476615256, state=OPEN 2023-05-30 19:56:55,863 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 19:56:55,863 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:56:55,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 19:56:55,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41207,1685476615256 in 221 msec 2023-05-30 19:56:55,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 19:56:55,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 385 msec 2023-05-30 19:56:55,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 444 msec 2023-05-30 19:56:55,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476615871, completionTime=-1 2023-05-30 19:56:55,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 19:56:55,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-30 19:56:55,873 DEBUG [hconnection-0x46ea4d62-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:56:55,875 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:56:55,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 19:56:55,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476675877 2023-05-30 19:56:55,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685476735877 2023-05-30 19:56:55,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40009,1685476615207-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40009,1685476615207-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40009,1685476615207-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40009, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 19:56:55,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:56:55,885 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 19:56:55,886 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 19:56:55,887 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:56:55,888 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:56:55,890 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:55,891 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560 empty. 2023-05-30 19:56:55,891 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:55,891 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 19:56:55,904 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 19:56:55,905 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0f53eee5aad12330b88521bcb3f01560, NAME => 'hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp 2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0f53eee5aad12330b88521bcb3f01560, disabling compactions & flushes 2023-05-30 19:56:55,913 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 
2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. after waiting 0 ms 2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:56:55,913 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:56:55,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0f53eee5aad12330b88521bcb3f01560: 2023-05-30 19:56:55,916 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:56:55,917 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476615917"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476615917"}]},"ts":"1685476615917"} 2023-05-30 19:56:55,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:56:55,921 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:56:55,921 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476615921"}]},"ts":"1685476615921"} 2023-05-30 19:56:55,923 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 19:56:55,929 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0f53eee5aad12330b88521bcb3f01560, ASSIGN}] 2023-05-30 19:56:55,931 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0f53eee5aad12330b88521bcb3f01560, ASSIGN 2023-05-30 19:56:55,932 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0f53eee5aad12330b88521bcb3f01560, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41207,1685476615256; forceNewPlan=false, retain=false 2023-05-30 19:56:56,082 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0f53eee5aad12330b88521bcb3f01560, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,083 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476616082"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476616082"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476616082"}]},"ts":"1685476616082"} 2023-05-30 19:56:56,085 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 0f53eee5aad12330b88521bcb3f01560, server=jenkins-hbase4.apache.org,41207,1685476615256}] 2023-05-30 19:56:56,247 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:56:56,247 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f53eee5aad12330b88521bcb3f01560, NAME => 'hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:56:56,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:56,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,251 INFO [StoreOpener-0f53eee5aad12330b88521bcb3f01560-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,253 DEBUG [StoreOpener-0f53eee5aad12330b88521bcb3f01560-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560/info 2023-05-30 19:56:56,253 DEBUG [StoreOpener-0f53eee5aad12330b88521bcb3f01560-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560/info 2023-05-30 19:56:56,253 INFO [StoreOpener-0f53eee5aad12330b88521bcb3f01560-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f53eee5aad12330b88521bcb3f01560 columnFamilyName info 2023-05-30 19:56:56,254 INFO [StoreOpener-0f53eee5aad12330b88521bcb3f01560-1] regionserver.HStore(310): Store=0f53eee5aad12330b88521bcb3f01560/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:56,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:56:56,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/hbase/namespace/0f53eee5aad12330b88521bcb3f01560/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:56:56,269 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0f53eee5aad12330b88521bcb3f01560; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865361, jitterRate=0.10036346316337585}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:56:56,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0f53eee5aad12330b88521bcb3f01560: 2023-05-30 19:56:56,271 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560., pid=6, masterSystemTime=1685476616239 2023-05-30 19:56:56,273 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:56:56,273 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 
2023-05-30 19:56:56,274 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0f53eee5aad12330b88521bcb3f01560, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,274 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476616274"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476616274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476616274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476616274"}]},"ts":"1685476616274"} 2023-05-30 19:56:56,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 19:56:56,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 0f53eee5aad12330b88521bcb3f01560, server=jenkins-hbase4.apache.org,41207,1685476615256 in 191 msec 2023-05-30 19:56:56,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 19:56:56,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0f53eee5aad12330b88521bcb3f01560, ASSIGN in 350 msec 2023-05-30 19:56:56,283 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:56:56,283 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476616283"}]},"ts":"1685476616283"} 2023-05-30 19:56:56,285 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 19:56:56,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 19:56:56,288 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:56:56,289 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:56:56,289 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:56,290 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 404 msec 2023-05-30 19:56:56,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 19:56:56,301 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): 
master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:56:56,305 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-05-30 19:56:56,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 19:56:56,322 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:56:56,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-30 19:56:56,339 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 19:56:56,342 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.059sec 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40009,1685476615207-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 19:56:56,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40009,1685476615207-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 19:56:56,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 19:56:56,376 DEBUG [Listener at localhost/42029] zookeeper.ReadOnlyZKClient(139): Connect 0x1f65dda3 to 127.0.0.1:62840 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:56:56,380 DEBUG [Listener at localhost/42029] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3547c498, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:56:56,382 DEBUG [hconnection-0x5cea3551-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:56:56,384 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52958, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:56:56,386 INFO [Listener at localhost/42029] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:56:56,386 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:56,389 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 19:56:56,389 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:56:56,390 INFO [Listener at localhost/42029] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:56:56,403 INFO [Listener at localhost/42029] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-30 19:56:56,405 INFO [Listener at localhost/42029] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33145 2023-05-30 19:56:56,405 INFO [Listener at localhost/42029] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 19:56:56,406 DEBUG [Listener at localhost/42029] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 19:56:56,406 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:56,407 INFO [Listener at localhost/42029] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:56:56,408 INFO [Listener at localhost/42029] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33145 connecting to ZooKeeper ensemble=127.0.0.1:62840 2023-05-30 19:56:56,411 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:331450x0, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:56:56,412 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(162): regionserver:331450x0, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:56:56,413 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33145-0x1007daaa7160005 connected 2023-05-30 19:56:56,413 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(162): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-30 19:56:56,414 DEBUG [Listener at localhost/42029] zookeeper.ZKUtil(164): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:56:56,416 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33145 2023-05-30 19:56:56,417 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33145 2023-05-30 19:56:56,418 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33145 2023-05-30 19:56:56,419 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33145 2023-05-30 19:56:56,419 DEBUG [Listener at localhost/42029] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33145 2023-05-30 19:56:56,420 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(951): ClusterId : a24f6eac-da5c-4f23-b0c9-e2616350a305 2023-05-30 19:56:56,421 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 19:56:56,423 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 19:56:56,423 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 19:56:56,425 DEBUG 
[RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 19:56:56,426 DEBUG [RS:1;jenkins-hbase4:33145] zookeeper.ReadOnlyZKClient(139): Connect 0x29bb838e to 127.0.0.1:62840 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:56:56,429 DEBUG [RS:1;jenkins-hbase4:33145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@277c78c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:56:56,429 DEBUG [RS:1;jenkins-hbase4:33145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2343e8fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:56:56,438 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33145 2023-05-30 19:56:56,438 INFO [RS:1;jenkins-hbase4:33145] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 19:56:56,438 INFO [RS:1;jenkins-hbase4:33145] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 19:56:56,438 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1022): About to register with Master. 2023-05-30 19:56:56,439 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40009,1685476615207 with isa=jenkins-hbase4.apache.org/172.31.14.131:33145, startcode=1685476616402 2023-05-30 19:56:56,439 DEBUG [RS:1;jenkins-hbase4:33145] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 19:56:56,441 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43255, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:56:56,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,442 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a 2023-05-30 19:56:56,442 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43855 2023-05-30 19:56:56,442 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:56:56,445 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:56:56,445 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:56:56,445 DEBUG [RS:1;jenkins-hbase4:33145] zookeeper.ZKUtil(162): regionserver:33145-0x1007daaa7160005, 
quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,445 WARN [RS:1;jenkins-hbase4:33145] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-30 19:56:56,445 INFO [RS:1;jenkins-hbase4:33145] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:56:56,445 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33145,1685476616402] 2023-05-30 19:56:56,445 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,447 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,449 DEBUG [RS:1;jenkins-hbase4:33145] zookeeper.ZKUtil(162): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,450 DEBUG [RS:1;jenkins-hbase4:33145] zookeeper.ZKUtil(162): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,451 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:56:56,451 INFO [RS:1;jenkins-hbase4:33145] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:56:56,454 INFO [RS:1;jenkins-hbase4:33145] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:56:56,455 INFO [RS:1;jenkins-hbase4:33145] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:56:56,455 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:56,455 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:56:56,456 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,457 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:56:56,458 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,458 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,458 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,458 DEBUG [RS:1;jenkins-hbase4:33145] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:56:56,458 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:56,459 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:56,459 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:56:56,469 INFO [RS:1;jenkins-hbase4:33145] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:56:56,469 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33145,1685476616402-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 19:56:56,480 INFO [RS:1;jenkins-hbase4:33145] regionserver.Replication(203): jenkins-hbase4.apache.org,33145,1685476616402 started 2023-05-30 19:56:56,480 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33145,1685476616402, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33145, sessionid=0x1007daaa7160005 2023-05-30 19:56:56,480 INFO [Listener at localhost/42029] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:33145,5,FailOnTimeoutGroup] 2023-05-30 19:56:56,480 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:56:56,480 INFO [Listener at localhost/42029] wal.TestLogRolling(323): Replication=2 2023-05-30 19:56:56,480 DEBUG [RS:1;jenkins-hbase4:33145] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,480 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33145,1685476616402' 2023-05-30 19:56:56,481 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:56:56,481 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:56:56,482 DEBUG [Listener at localhost/42029] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33145,1685476616402' 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:56:56,483 DEBUG [RS:1;jenkins-hbase4:33145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:56:56,484 DEBUG [RS:1;jenkins-hbase4:33145] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:56:56,484 INFO [RS:1;jenkins-hbase4:33145] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 19:56:56,484 INFO [RS:1;jenkins-hbase4:33145] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-30 19:56:56,486 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-30 19:56:56,487 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
2023-05-30 19:56:56,487 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-30 19:56:56,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:56:56,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-30 19:56:56,491 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:56:56,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-30 19:56:56,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:56:56,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:56:56,494 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,495 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb empty. 
2023-05-30 19:56:56,495 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,495 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-30 19:56:56,509 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-30 19:56:56,510 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => da721675840b39a55d251d2700845acb, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/.tmp 2023-05-30 19:56:56,519 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:56,519 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing da721675840b39a55d251d2700845acb, disabling compactions & flushes 2023-05-30 19:56:56,519 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:56:56,520 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:56:56,520 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. after waiting 0 ms 2023-05-30 19:56:56,520 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:56:56,520 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 
2023-05-30 19:56:56,520 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:56:56,523 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:56:56,524 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685476616524"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476616524"}]},"ts":"1685476616524"} 2023-05-30 19:56:56,526 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:56:56,527 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:56:56,528 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476616527"}]},"ts":"1685476616527"} 2023-05-30 19:56:56,529 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-30 19:56:56,536 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-30 19:56:56,538 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-30 19:56:56,538 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-30 19:56:56,538 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-30 19:56:56,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=da721675840b39a55d251d2700845acb, ASSIGN}] 2023-05-30 19:56:56,540 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=da721675840b39a55d251d2700845acb, ASSIGN 2023-05-30 19:56:56,541 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=da721675840b39a55d251d2700845acb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41207,1685476615256; forceNewPlan=false, retain=false 2023-05-30 19:56:56,587 INFO [RS:1;jenkins-hbase4:33145] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33145%2C1685476616402, suffix=, logDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,33145,1685476616402, 
archiveDir=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs, maxLogs=32 2023-05-30 19:56:56,599 INFO [RS:1;jenkins-hbase4:33145] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,33145,1685476616402/jenkins-hbase4.apache.org%2C33145%2C1685476616402.1685476616588 2023-05-30 19:56:56,599 DEBUG [RS:1;jenkins-hbase4:33145] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] 2023-05-30 19:56:56,693 INFO [jenkins-hbase4:40009] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-30 19:56:56,694 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=da721675840b39a55d251d2700845acb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,695 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685476616694"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476616694"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476616694"}]},"ts":"1685476616694"} 2023-05-30 19:56:56,697 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure da721675840b39a55d251d2700845acb, server=jenkins-hbase4.apache.org,41207,1685476615256}] 2023-05-30 19:56:56,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 
2023-05-30 19:56:56,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da721675840b39a55d251d2700845acb, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:56:56,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:56:56,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,858 INFO [StoreOpener-da721675840b39a55d251d2700845acb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,860 DEBUG [StoreOpener-da721675840b39a55d251d2700845acb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info 2023-05-30 19:56:56,860 DEBUG [StoreOpener-da721675840b39a55d251d2700845acb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info 2023-05-30 19:56:56,860 INFO [StoreOpener-da721675840b39a55d251d2700845acb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da721675840b39a55d251d2700845acb columnFamilyName info 2023-05-30 19:56:56,861 INFO [StoreOpener-da721675840b39a55d251d2700845acb-1] regionserver.HStore(310): Store=da721675840b39a55d251d2700845acb/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:56:56,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da721675840b39a55d251d2700845acb 2023-05-30 19:56:56,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:56:56,869 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da721675840b39a55d251d2700845acb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=878952, jitterRate=0.11764617264270782}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:56:56,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:56:56,870 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb., pid=11, masterSystemTime=1685476616850 2023-05-30 19:56:56,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:56:56,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 
2023-05-30 19:56:56,872 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=da721675840b39a55d251d2700845acb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:56:56,873 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685476616872"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476616872"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476616872"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476616872"}]},"ts":"1685476616872"} 2023-05-30 19:56:56,878 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-30 19:56:56,878 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure da721675840b39a55d251d2700845acb, server=jenkins-hbase4.apache.org,41207,1685476615256 in 178 msec 2023-05-30 19:56:56,880 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-30 19:56:56,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=da721675840b39a55d251d2700845acb, ASSIGN in 340 msec 2023-05-30 19:56:56,882 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:56:56,882 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476616882"}]},"ts":"1685476616882"} 2023-05-30 19:56:56,884 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-30 19:56:56,886 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:56:56,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 398 msec 2023-05-30 19:56:59,341 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 19:57:01,518 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-30 19:57:01,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-30 19:57:01,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-30 19:57:06,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:57:06,494 INFO [Listener at localhost/42029] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-30 19:57:06,497 DEBUG [Listener at localhost/42029] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-30 19:57:06,497 DEBUG [Listener at localhost/42029] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:57:06,510 WARN [Listener at localhost/42029] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:06,513 WARN [Listener at localhost/42029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:06,514 INFO [Listener at localhost/42029] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:06,518 INFO [Listener at localhost/42029] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_42357_datanode____.4pzruq/webapp 2023-05-30 19:57:06,609 INFO [Listener at localhost/42029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42357 2023-05-30 19:57:06,617 WARN [Listener at localhost/43147] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:06,636 WARN [Listener at localhost/43147] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:06,638 WARN [Listener at localhost/43147] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:06,640 INFO [Listener at localhost/43147] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:06,644 INFO [Listener at localhost/43147] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_34139_datanode____.ft0jwj/webapp 2023-05-30 19:57:06,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf651657383233e6d: Processing first storage report for DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd from datanode 10655af4-f0b5-4c1a-a02e-9f5c140a154a 2023-05-30 19:57:06,718 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf651657383233e6d: from storage DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd node DatanodeRegistration(127.0.0.1:41643, datanodeUuid=10655af4-f0b5-4c1a-a02e-9f5c140a154a, infoPort=39477, infoSecurePort=0, ipcPort=43147, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,718 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf651657383233e6d: Processing first storage report for DS-d09522dd-f7f7-486e-901b-e85e9ad67051 from datanode 10655af4-f0b5-4c1a-a02e-9f5c140a154a 2023-05-30 19:57:06,718 INFO [Block report 
processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf651657383233e6d: from storage DS-d09522dd-f7f7-486e-901b-e85e9ad67051 node DatanodeRegistration(127.0.0.1:41643, datanodeUuid=10655af4-f0b5-4c1a-a02e-9f5c140a154a, infoPort=39477, infoSecurePort=0, ipcPort=43147, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,743 INFO [Listener at localhost/43147] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34139 2023-05-30 19:57:06,751 WARN [Listener at localhost/46573] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:06,770 WARN [Listener at localhost/46573] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:06,772 WARN [Listener at localhost/46573] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:06,773 INFO [Listener at localhost/46573] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:06,777 INFO [Listener at localhost/46573] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_46877_datanode____.1l5bk3/webapp 2023-05-30 19:57:06,850 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x53738bee0c0fef22: Processing first storage report for DS-2eafe365-8083-481f-bf0b-1161558230bd from datanode f4dec89a-3b3e-4a50-bdad-3b157f023e4b 2023-05-30 19:57:06,850 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x53738bee0c0fef22: from storage DS-2eafe365-8083-481f-bf0b-1161558230bd node DatanodeRegistration(127.0.0.1:37353, datanodeUuid=f4dec89a-3b3e-4a50-bdad-3b157f023e4b, infoPort=37797, infoSecurePort=0, ipcPort=46573, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,850 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x53738bee0c0fef22: Processing first storage report for DS-5e979e0e-887f-4e91-b4bb-90087423d47f from datanode f4dec89a-3b3e-4a50-bdad-3b157f023e4b 2023-05-30 19:57:06,850 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x53738bee0c0fef22: from storage DS-5e979e0e-887f-4e91-b4bb-90087423d47f node DatanodeRegistration(127.0.0.1:37353, datanodeUuid=f4dec89a-3b3e-4a50-bdad-3b157f023e4b, infoPort=37797, infoSecurePort=0, ipcPort=46573, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,879 INFO [Listener at localhost/46573] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46877 2023-05-30 19:57:06,888 WARN [Listener at localhost/43843] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:06,994 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xab682ca02ce82b9c: Processing first storage report for 
DS-703bc821-46b3-40ff-8be8-16bdfa9948ab from datanode 2c61cdd3-f648-43bc-8de4-bd847ae997df 2023-05-30 19:57:06,994 WARN [Listener at localhost/43843] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:06,995 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xab682ca02ce82b9c: from storage DS-703bc821-46b3-40ff-8be8-16bdfa9948ab node DatanodeRegistration(127.0.0.1:41993, datanodeUuid=2c61cdd3-f648-43bc-8de4-bd847ae997df, infoPort=45535, infoSecurePort=0, ipcPort=43843, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,996 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:06,998 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:06,998 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-30 19:57:06,998 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476615349 block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK], DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]) is bad. 
2023-05-30 19:57:06,999 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40687]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:06,998 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:06,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xab682ca02ce82b9c: Processing first storage report for DS-54c5433c-017f-433e-b8fc-bb250d3b4c47 from datanode 2c61cdd3-f648-43bc-8de4-bd847ae997df 2023-05-30 19:57:06,999 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,33145,1685476616402/jenkins-hbase4.apache.org%2C33145%2C1685476616402.1685476616588 block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]) is bad. 
2023-05-30 19:57:06,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xab682ca02ce82b9c: from storage DS-54c5433c-017f-433e-b8fc-bb250d3b4c47 node DatanodeRegistration(127.0.0.1:41993, datanodeUuid=2c61cdd3-f648-43bc-8de4-bd847ae997df, infoPort=45535, infoSecurePort=0, ipcPort=43843, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 0, hasStaleStorage: false, processing time: 2 msecs, invalidatedBlocks: 0 2023-05-30 19:57:06,998 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476615665 block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]) is bad. 2023-05-30 19:57:07,000 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476615822.meta block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]) is bad. 
2023-05-30 19:57:07,006 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1819875969_17 at /127.0.0.1:47370 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47370 dst: /127.0.0.1:35199 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,010 INFO [Listener at localhost/43843] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:07,012 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-17788612_17 at /127.0.0.1:47452 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47452 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35199 remote=/127.0.0.1:47452]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,013 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35199]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,014 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-17788612_17 at /127.0.0.1:50318 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40687:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50318 dst: /127.0.0.1:40687 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,014 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:47400 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47400 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35199 remote=/127.0.0.1:47400]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,017 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35199]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,016 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:47398 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47398 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35199 remote=/127.0.0.1:47398]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,015 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35199]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,018 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:50248 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40687:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50248 dst: /127.0.0.1:40687 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,023 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:50250 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40687:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50250 dst: /127.0.0.1:40687 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,115 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1819875969_17 at /127.0.0.1:50214 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40687:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50214 dst: /127.0.0.1:40687 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,115 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:07,116 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid 7edf5cf5-1b20-44bf-abdc-84bc34baf02c) service to localhost/127.0.0.1:43855 2023-05-30 19:57:07,116 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data3/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:07,117 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data4/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:07,118 WARN [Listener at localhost/43843] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:07,118 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 
19:57:07,119 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:07,119 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:07,119 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:07,124 INFO [Listener at localhost/43843] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:07,227 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:42312 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42312 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,227 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-17788612_17 at /127.0.0.1:42300 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42300 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,229 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:07,228 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:42320 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42320 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,228 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1819875969_17 at /127.0.0.1:42292 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42292 dst: /127.0.0.1:35199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:07,230 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid 36d1d453-6cf5-4aa9-a417-575ff1a5a77c) service to localhost/127.0.0.1:43855 2023-05-30 19:57:07,232 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data1/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:07,232 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data2/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:07,241 WARN [RS:0;jenkins-hbase4:41207.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:07,242 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41207%2C1685476615256:(num 1685476615665) roll requested 2023-05-30 19:57:07,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41207] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:07,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41207] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:52958 deadline: 1685476637241, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-30 19:57:07,247 WARN [Thread-629] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741839_1019 2023-05-30 19:57:07,250 WARN [Thread-629] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] 2023-05-30 19:57:07,259 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-30 19:57:07,259 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476615665 with entries=4, filesize=983 B; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 2023-05-30 19:57:07,261 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK], DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]] 2023-05-30 19:57:07,261 DEBUG [regionserver/jenkins-hbase4:0.logRoller]
wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476615665 is not closed yet, will try archiving it next time 2023-05-30 19:57:07,261 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:07,262 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476615665; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:19,334 INFO [Listener at localhost/43843] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 2023-05-30 19:57:19,335 WARN [Listener at localhost/43843] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:19,336 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:19,336 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 block BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK], DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK]) is bad. 
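The entries above trace the path the test exercises: a datanode in the WAL pipeline dies, the append fails with DamagedWALException, the logRoller rolls to a new WAL file, and TestLogRolling then reads log.getCurrentFileName() to confirm the roll. The following is a minimal, hypothetical Java sketch of that pattern only; it is not the actual TestLogRolling source, and the helper names, row/family/qualifier values, and the assertion are assumptions for illustration.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.regionserver.wal.FSHLog;
import org.apache.hadoop.hbase.util.Bytes;

public class LogRollOnDatanodeDeathSketch {
  // Illustrative sketch only: write an edit, stop a pipeline datanode, write
  // again, then verify the WAL file name changed because a roll was requested.
  public static void rollOnDatanodeDeath(HBaseTestingUtility testUtil, FSHLog log,
      Table table) throws Exception {
    Path before = log.getCurrentFileName();   // same call TestLogRolling(375) prints above
    table.put(new Put(Bytes.toBytes("row-before"))
        .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v1")));
    // Stopping a datanode shrinks the DFS write pipeline under the WAL; FSHLog
    // notices the reduced replication on a later sync and requests a roll.
    testUtil.getDFSCluster().stopDataNode(0);
    table.put(new Put(Bytes.toBytes("row-after"))
        .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("v2")));
    Path after = log.getCurrentFileName();
    assert !before.equals(after) : "expected a new WAL file after the datanode died";
  }
}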
2023-05-30 19:57:19,341 INFO [Listener at localhost/43843] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:19,342 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:56252 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:41643:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56252 dst: /127.0.0.1:41643 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41643 remote=/127.0.0.1:56252]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:19,343 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41643]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:19,344 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:36938 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741840_1020]] datanode.DataXceiver(323): 127.0.0.1:41993:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36938 dst: /127.0.0.1:41993 java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:19,446 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:19,446 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid 2c61cdd3-f648-43bc-8de4-bd847ae997df) service to localhost/127.0.0.1:43855 2023-05-30 19:57:19,447 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data9/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:19,447 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data10/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:19,452 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]] 2023-05-30 19:57:19,453 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]] 2023-05-30 19:57:19,453 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41207%2C1685476615256:(num 1685476627242) roll requested 2023-05-30 19:57:19,462 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 2023-05-30 19:57:19,462 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK], DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]] 2023-05-30 19:57:19,462 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 is not closed yet, will try archiving it next time 2023-05-30 19:57:23,457 WARN [Listener at localhost/43843] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:23,460 WARN [ResponseProcessor for block BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022 java.io.IOException: Bad response ERROR for BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022 from datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-30 19:57:23,460 WARN [DataStreamer for file /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 block BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022] hdfs.DataStreamer(1548): Error Recovery for BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK], DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK]) is bad. 
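The "Found 1 replicas but expecting no less than 2 replicas" warnings and the roll requests that follow are driven by FSHLog's low-replication settings. Below is a minimal configuration sketch, assuming the standard FSHLog keys; the numeric values are illustrative and not taken from this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class LowReplicationRollConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Request a WAL roll when the pipeline has fewer replicas than this
    // ("Found 1 replicas but expecting no less than 2 replicas" above).
    conf.setInt("hbase.regionserver.hlog.tolerable.lowreplication", 2);
    // Stop requesting further rolls after this many consecutive low-replication
    // rolls ("Too many consecutive RollWriter requests ..." later in the log).
    conf.setInt("hbase.regionserver.hlog.lowreplication.rolllimit", 3);
    return conf;
  }
}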
2023-05-30 19:57:23,461 WARN [PacketResponder: BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41643]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,462 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:55230 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55230 dst: /127.0.0.1:37353 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,463 INFO [Listener at localhost/43843] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:23,567 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:45942 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:41643:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45942 dst: /127.0.0.1:41643 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,569 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:23,569 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid 10655af4-f0b5-4c1a-a02e-9f5c140a154a) service to localhost/127.0.0.1:43855 2023-05-30 19:57:23,570 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data5/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:23,571 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data6/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:23,575 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,575 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,575 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41207%2C1685476615256:(num 1685476639453) roll requested 2023-05-30 19:57:23,579 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741842_1024 2023-05-30 19:57:23,579 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] 2023-05-30 19:57:23,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41207] regionserver.HRegion(9158): Flush requested on da721675840b39a55d251d2700845acb 2023-05-30 19:57:23,581 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing da721675840b39a55d251d2700845acb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:57:23,582 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741843_1025 2023-05-30 19:57:23,582 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK] 2023-05-30 19:57:23,586 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33308 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741844_1026]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741844_1026 to mirror 127.0.0.1:41993: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,586 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33308 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741844_1026]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33308 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,587 WARN [Thread-650] hdfs.DataStreamer(1658): 
Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741844_1026 2023-05-30 19:57:23,587 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:23,589 WARN [Thread-650] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741845_1027 2023-05-30 19:57:23,589 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741846_1028 2023-05-30 19:57:23,589 WARN [Thread-650] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:23,590 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:23,590 WARN [IPC Server handler 4 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-30 19:57:23,590 WARN [IPC Server handler 4 on default port 43855] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-30 19:57:23,591 WARN [IPC Server handler 4 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-30 19:57:23,592 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33310 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741847_1029 to mirror 127.0.0.1:35199: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,594 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33310 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33310 dst: /127.0.0.1:37353 java.net.ConnectException: 
Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,595 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741847_1029 2023-05-30 19:57:23,595 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK] 2023-05-30 19:57:23,602 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33316 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741849_1031]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741849_1031 to mirror 127.0.0.1:41993: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,602 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741849_1031 2023-05-30 19:57:23,602 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33316 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741849_1031]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33316 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,602 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:23,603 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643575 2023-05-30 19:57:23,603 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,603 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 is not closed yet, will try archiving it next time 2023-05-30 19:57:23,606 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33322 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741850_1032]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741850_1032 to mirror 127.0.0.1:40687: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,606 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741850_1032 2023-05-30 19:57:23,606 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33322 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741850_1032]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33322 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,606 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] 2023-05-30 19:57:23,607 WARN [IPC Server handler 4 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, 
storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-30 19:57:23,607 WARN [IPC Server handler 4 on default port 43855] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-30 19:57:23,607 WARN [IPC Server handler 4 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-30 19:57:23,610 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/37e9eb99bfbb435ba285e0dda97efc3b 2023-05-30 19:57:23,618 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/37e9eb99bfbb435ba285e0dda97efc3b as hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/37e9eb99bfbb435ba285e0dda97efc3b 2023-05-30 19:57:23,624 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/37e9eb99bfbb435ba285e0dda97efc3b, entries=5, sequenceid=12, filesize=10.0 K 2023-05-30 19:57:23,625 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,625 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,625 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=7.35 KB/7531 for da721675840b39a55d251d2700845acb in 44ms, sequenceid=12, compaction requested=false 2023-05-30 19:57:23,625 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41207%2C1685476615256:(num 1685476643575) roll requested 2023-05-30 19:57:23,626 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:57:23,628 WARN [Thread-663] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741852_1034 2023-05-30 19:57:23,629 WARN [Thread-663] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK] 2023-05-30 19:57:23,630 WARN [Thread-663] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741853_1035 2023-05-30 19:57:23,630 WARN [Thread-663] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:23,631 WARN [Thread-663] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741854_1036 2023-05-30 19:57:23,632 WARN [Thread-663] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:23,634 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33352 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741855_1037]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741855_1037 to mirror 127.0.0.1:40687: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,634 WARN [Thread-663] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741855_1037 2023-05-30 19:57:23,634 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33352 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741855_1037]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33352 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,635 WARN [Thread-663] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] 2023-05-30 19:57:23,636 WARN [IPC Server handler 2 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-30 19:57:23,636 WARN [IPC Server handler 2 on default port 43855] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-30 19:57:23,636 WARN [IPC Server handler 2 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-30 19:57:23,640 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643575 with entries=1, filesize=440 B; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643626 2023-05-30 19:57:23,641 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:23,641 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 is not closed yet, will try archiving it next time 2023-05-30 19:57:23,641 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643575 is not closed yet, will try archiving it next time 2023-05-30 19:57:23,641 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving 
hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 to hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476627242 2023-05-30 19:57:23,643 DEBUG [Close-WAL-Writer-1] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 is not closed yet, will try archiving it next time 2023-05-30 19:57:23,797 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-30 19:57:23,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41207] regionserver.HRegion(9158): Flush requested on da721675840b39a55d251d2700845acb 2023-05-30 19:57:23,798 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing da721675840b39a55d251d2700845acb 1/1 column families, dataSize=8.40 KB heapSize=9.25 KB 2023-05-30 19:57:23,803 WARN [Thread-668] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741857_1039 2023-05-30 19:57:23,804 WARN [Thread-668] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:23,805 WARN [Thread-668] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741858_1040 2023-05-30 19:57:23,805 WARN [Thread-668] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40687,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK] 2023-05-30 19:57:23,806 WARN [Thread-668] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741859_1041 2023-05-30 19:57:23,807 WARN [Thread-668] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:23,809 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33382 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741860_1042 to mirror 127.0.0.1:35199: java.net.ConnectException: Connection refused 2023-05-30 19:57:23,809 WARN [Thread-668] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741860_1042 2023-05-30 19:57:23,809 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:33382 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33382 dst: /127.0.0.1:37353 java.net.ConnectException: Connection 
refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:23,810 WARN [Thread-668] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK] 2023-05-30 19:57:23,810 WARN [IPC Server handler 1 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-30 19:57:23,810 WARN [IPC Server handler 1 on default port 43855] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-30 19:57:23,810 WARN [IPC Server handler 1 on default port 43855] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-30 19:57:24,003 WARN [Listener at localhost/43843] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:24,006 WARN [Listener at localhost/43843] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:24,007 INFO [Listener at localhost/43843] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:24,012 INFO [Listener at localhost/43843] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/java.io.tmpdir/Jetty_localhost_38119_datanode____.wuuvlx/webapp 2023-05-30 19:57:24,103 INFO [Listener at localhost/43843] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38119 2023-05-30 19:57:24,112 WARN [Listener at localhost/44807] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:24,207 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x243483f357716b09: Processing first storage report for DS-65d236a7-1962-4084-b537-cf1def2f88d5 from datanode 7edf5cf5-1b20-44bf-abdc-84bc34baf02c 2023-05-30 19:57:24,208 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x243483f357716b09: from storage DS-65d236a7-1962-4084-b537-cf1def2f88d5 node DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-30 19:57:24,208 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x243483f357716b09: Processing first storage report for DS-797c29f4-02fc-45d7-991b-f90cfaf39947 from datanode 7edf5cf5-1b20-44bf-abdc-84bc34baf02c 2023-05-30 19:57:24,209 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x243483f357716b09: from storage DS-797c29f4-02fc-45d7-991b-f90cfaf39947 node DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:24,214 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.40 KB at sequenceid=23 (bloomFilter=true), to=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/fb4e1ec2507b48eb88bf188d1db07f34 2023-05-30 19:57:24,221 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/fb4e1ec2507b48eb88bf188d1db07f34 as hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/fb4e1ec2507b48eb88bf188d1db07f34 2023-05-30 19:57:24,228 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/fb4e1ec2507b48eb88bf188d1db07f34, entries=7, sequenceid=23, filesize=12.1 K 2023-05-30 19:57:24,229 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.40 KB/8606, heapSize ~9.23 KB/9456, currentSize=0 B/0 for da721675840b39a55d251d2700845acb in 431ms, sequenceid=23, compaction requested=false 2023-05-30 19:57:24,229 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:57:24,229 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=22.1 K, sizeToCheck=16.0 K 2023-05-30 19:57:24,229 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:57:24,229 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/fb4e1ec2507b48eb88bf188d1db07f34 because midkey is the same as first or last row 2023-05-30 19:57:24,851 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@38fa8b7e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:37353, datanodeUuid=f4dec89a-3b3e-4a50-bdad-3b157f023e4b, infoPort=37797, infoSecurePort=0, ipcPort=46573, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741841_1023 to 127.0.0.1:41643 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:24,851 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@66a1f496] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:37353, datanodeUuid=f4dec89a-3b3e-4a50-bdad-3b157f023e4b, infoPort=37797, infoSecurePort=0, ipcPort=46573, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741851_1033 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:25,432 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:25,432 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C40009%2C1685476615207:(num 1685476615349) roll requested 2023-05-30 19:57:25,437 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:25,438 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:25,444 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-30 19:57:25,445 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476615349 with entries=88, filesize=43.72 KB; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476645432 2023-05-30 19:57:25,445 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40999,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK], DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK]] 2023-05-30 19:57:25,445 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476615349 is not closed yet, will try archiving it next time 2023-05-30 19:57:25,445 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:25,445 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207/jenkins-hbase4.apache.org%2C40009%2C1685476615207.1685476615349; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:25,850 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5591e299] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:37353, datanodeUuid=f4dec89a-3b3e-4a50-bdad-3b157f023e4b, infoPort=37797, infoSecurePort=0, ipcPort=46573, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741848_1030 to 127.0.0.1:41643 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:31,208 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2ba42b64] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741837_1013 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:31,208 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@45406bac] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741835_1011 to 127.0.0.1:41643 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:32,207 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@43fc97db] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, 
storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741831_1007 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:32,207 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6863e65b] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741827_1003 to 127.0.0.1:41643 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:34,208 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@62961f38] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741828_1004 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:37,207 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@740e73c0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741836_1012 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at 
java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:38,208 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6a1c57d0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40999, datanodeUuid=7edf5cf5-1b20-44bf-abdc-84bc34baf02c, infoPort=34461, infoSecurePort=0, ipcPort=44807, storageInfo=lv=-57;cid=testClusterID;nsid=1942105352;c=1685476614615):Failed to transfer BP-3938265-172.31.14.131-1685476614615:blk_1073741830_1006 to 127.0.0.1:41993 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:42,750 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1819875969_17 at /127.0.0.1:60566 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741863_1045]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741863_1045 to mirror 127.0.0.1:41643: java.net.ConnectException: Connection refused 2023-05-30 19:57:42,750 WARN [Thread-732] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741863_1045 2023-05-30 19:57:42,751 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1819875969_17 at /127.0.0.1:60566 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741863_1045]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60566 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:42,751 WARN [Thread-732] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:42,761 INFO [Listener at localhost/44807] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643626 with entries=3, filesize=1.89 KB; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476662745 2023-05-30 19:57:42,761 DEBUG [Listener at localhost/44807] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK], DatanodeInfoWithStorage[127.0.0.1:40999,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]] 2023-05-30 19:57:42,761 DEBUG [Listener at localhost/44807] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643626 is not closed yet, will try archiving it next time 2023-05-30 19:57:42,762 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 to hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476639453 2023-05-30 19:57:42,766 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643575 to hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643575 2023-05-30 19:57:42,767 INFO [sync.4] wal.FSHLog(774): LowReplication-Roller was enabled. 
2023-05-30 19:57:42,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41207] regionserver.HRegion(9158): Flush requested on da721675840b39a55d251d2700845acb 2023-05-30 19:57:42,775 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing da721675840b39a55d251d2700845acb 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:57:42,779 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 19:57:42,779 INFO [Listener at localhost/44807] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-30 19:57:42,780 DEBUG [Listener at localhost/44807] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f65dda3 to 127.0.0.1:62840 2023-05-30 19:57:42,780 DEBUG [Listener at localhost/44807] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:42,780 DEBUG [Listener at localhost/44807] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 19:57:42,780 DEBUG [Listener at localhost/44807] util.JVMClusterUtil(257): Found active master hash=290661415, stopped=false 2023-05-30 19:57:42,780 INFO [Listener at localhost/44807] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:57:42,780 WARN [Thread-740] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741865_1047 2023-05-30 19:57:42,782 WARN [Thread-740] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:42,782 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:57:42,782 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:57:42,782 INFO [Listener at localhost/44807] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 19:57:42,782 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:57:42,782 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:42,783 DEBUG [Listener at localhost/44807] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0142a19f to 127.0.0.1:62840 2023-05-30 19:57:42,784 DEBUG [Listener at localhost/44807] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:42,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:57:42,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:57:42,784 INFO [Listener at localhost/44807] regionserver.HRegionServer(2295): ***** STOPPING region server 
'jenkins-hbase4.apache.org,41207,1685476615256' ***** 2023-05-30 19:57:42,784 INFO [Listener at localhost/44807] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 19:57:42,784 INFO [Listener at localhost/44807] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33145,1685476616402' ***** 2023-05-30 19:57:42,784 INFO [Listener at localhost/44807] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 19:57:42,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:57:42,784 INFO [RS:1;jenkins-hbase4:33145] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 19:57:42,784 INFO [RS:1;jenkins-hbase4:33145] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 19:57:42,784 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 19:57:42,785 INFO [RS:1;jenkins-hbase4:33145] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 19:57:42,785 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:57:42,785 INFO [RS:0;jenkins-hbase4:41207] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 19:57:42,785 DEBUG [RS:1;jenkins-hbase4:33145] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29bb838e to 127.0.0.1:62840 2023-05-30 19:57:42,785 DEBUG [RS:1;jenkins-hbase4:33145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:42,785 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33145,1685476616402; all regions closed. 2023-05-30 19:57:42,787 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:57:42,789 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,790 ERROR [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 2023-05-30 19:57:42,790 DEBUG [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,790 DEBUG [RS:1;jenkins-hbase4:33145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:42,790 INFO [RS:1;jenkins-hbase4:33145] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:57:42,794 INFO [RS:1;jenkins-hbase4:33145] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-30 19:57:42,795 INFO [RS:1;jenkins-hbase4:33145] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 19:57:42,795 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 19:57:42,795 INFO [RS:1;jenkins-hbase4:33145] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 19:57:42,796 INFO [RS:1;jenkins-hbase4:33145] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 19:57:42,796 INFO [RS:1;jenkins-hbase4:33145] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33145 2023-05-30 19:57:42,800 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:57:42,800 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33145,1685476616402 2023-05-30 19:57:42,800 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:57:42,800 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:57:42,800 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:57:42,801 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33145,1685476616402] 2023-05-30 19:57:42,801 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33145,1685476616402; numProcessing=1 2023-05-30 19:57:42,801 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=33 (bloomFilter=true), 
to=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/906cfa0042f74f5db37ec990c5b28e42 2023-05-30 19:57:42,803 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33145,1685476616402 already deleted, retry=false 2023-05-30 19:57:42,803 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33145,1685476616402 expired; onlineServers=1 2023-05-30 19:57:42,813 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/906cfa0042f74f5db37ec990c5b28e42 as hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/906cfa0042f74f5db37ec990c5b28e42 2023-05-30 19:57:42,820 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/906cfa0042f74f5db37ec990c5b28e42, entries=7, sequenceid=33, filesize=12.1 K 2023-05-30 19:57:42,821 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=3.15 KB/3228 for da721675840b39a55d251d2700845acb in 46ms, sequenceid=33, compaction requested=true 2023-05-30 19:57:42,821 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:57:42,821 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=34.2 K, sizeToCheck=16.0 K 2023-05-30 19:57:42,821 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 19:57:42,821 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/906cfa0042f74f5db37ec990c5b28e42 because midkey is the same as first or last row 2023-05-30 19:57:42,821 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 19:57:42,821 INFO [RS:0;jenkins-hbase4:41207] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 19:57:42,821 INFO [RS:0;jenkins-hbase4:41207] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-30 19:57:42,821 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(3303): Received CLOSE for da721675840b39a55d251d2700845acb 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(3303): Received CLOSE for 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:57:42,822 DEBUG [RS:0;jenkins-hbase4:41207] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2b0dfbe4 to 127.0.0.1:62840 2023-05-30 19:57:42,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da721675840b39a55d251d2700845acb, disabling compactions & flushes 2023-05-30 19:57:42,822 DEBUG [RS:0;jenkins-hbase4:41207] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:42,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 19:57:42,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 19:57:42,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. after waiting 0 ms 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 19:57:42,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 
2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 19:57:42,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing da721675840b39a55d251d2700845acb 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-30 19:57:42,822 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-30 19:57:42,822 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, da721675840b39a55d251d2700845acb=TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb., 0f53eee5aad12330b88521bcb3f01560=hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560.} 2023-05-30 19:57:42,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:57:42,823 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1504): Waiting on 0f53eee5aad12330b88521bcb3f01560, 1588230740, da721675840b39a55d251d2700845acb 2023-05-30 19:57:42,823 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:57:42,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:57:42,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:57:42,823 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:57:42,824 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-05-30 19:57:42,824 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,824 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta:.meta(num 1685476615822) roll requested 2023-05-30 19:57:42,824 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:57:42,825 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,41207,1685476615256: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,826 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-30 19:57:42,829 WARN [Thread-749] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741868_1050 2023-05-30 19:57:42,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:60594 [Receiving block BP-3938265-172.31.14.131-1685476614615:blk_1073741867_1049]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current]'}, localName='127.0.0.1:37353', datanodeUuid='f4dec89a-3b3e-4a50-bdad-3b157f023e4b', xmitsInProgress=0}:Exception transfering block BP-3938265-172.31.14.131-1685476614615:blk_1073741867_1049 to mirror 127.0.0.1:41643: java.net.ConnectException: Connection refused 2023-05-30 19:57:42,829 WARN [Thread-748] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741867_1049 2023-05-30 19:57:42,829 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_227956538_17 at /127.0.0.1:60594 [Receiving 
block BP-3938265-172.31.14.131-1685476614615:blk_1073741867_1049]] datanode.DataXceiver(323): 127.0.0.1:37353:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60594 dst: /127.0.0.1:37353 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:42,830 WARN [Thread-749] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41993,DS-703bc821-46b3-40ff-8be8-16bdfa9948ab,DISK] 2023-05-30 19:57:42,830 WARN [Thread-748] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:42,830 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-30 19:57:42,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-30 19:57:42,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-30 19:57:42,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-30 19:57:42,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1134034944, "init": 513802240, "max": 2051014656, "used": 673944648 }, "NonHeapMemoryUsage": { "committed": 134045696, "init": 2555904, "max": -1, "used": 131392632 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-30 19:57:42,840 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-30 19:57:42,840 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476615822.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476662824.meta 2023-05-30 19:57:42,841 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37353,DS-2eafe365-8083-481f-bf0b-1161558230bd,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40999,DS-65d236a7-1962-4084-b537-cf1def2f88d5,DISK]] 2023-05-30 19:57:42,841 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,841 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476615822.meta is not closed yet, will try archiving it next time 2023-05-30 19:57:42,842 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.meta.1685476615822.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=39 (bloomFilter=true), to=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/3ce58f1b44a5477893806e81fbb828ef 2023-05-30 19:57:42,844 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40009] master.MasterRpcServices(609): jenkins-hbase4.apache.org,41207,1685476615256 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,41207,1685476615256: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35199,DS-aefc053e-c23c-4832-a9ac-836fbdbbfe9d,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:42,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/.tmp/info/3ce58f1b44a5477893806e81fbb828ef as hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/3ce58f1b44a5477893806e81fbb828ef 2023-05-30 19:57:42,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/info/3ce58f1b44a5477893806e81fbb828ef, entries=3, sequenceid=39, filesize=7.9 K 2023-05-30 19:57:42,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for da721675840b39a55d251d2700845acb in 35ms, sequenceid=39, compaction requested=true 2023-05-30 19:57:42,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/data/default/TestLogRolling-testLogRollOnDatanodeDeath/da721675840b39a55d251d2700845acb/recovered.edits/42.seqid, newMaxSeqId=42, maxSeqId=1 2023-05-30 19:57:42,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:57:42,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da721675840b39a55d251d2700845acb: 2023-05-30 19:57:42,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685476616487.da721675840b39a55d251d2700845acb. 2023-05-30 19:57:42,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f53eee5aad12330b88521bcb3f01560, disabling compactions & flushes 2023-05-30 19:57:42,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:42,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:42,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. after waiting 0 ms 2023-05-30 19:57:42,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 
2023-05-30 19:57:42,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f53eee5aad12330b88521bcb3f01560: 2023-05-30 19:57:42,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:43,023 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 19:57:43,024 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(3303): Received CLOSE for 0f53eee5aad12330b88521bcb3f01560 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f53eee5aad12330b88521bcb3f01560, disabling compactions & flushes 2023-05-30 19:57:43,024 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:57:43,024 DEBUG [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1504): Waiting on 0f53eee5aad12330b88521bcb3f01560, 1588230740 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:57:43,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. after waiting 0 ms 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f53eee5aad12330b88521bcb3f01560: 2023-05-30 19:57:43,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685476615884.0f53eee5aad12330b88521bcb3f01560. 
2023-05-30 19:57:43,082 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,082 INFO [RS:1;jenkins-hbase4:33145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33145,1685476616402; zookeeper connection closed. 2023-05-30 19:57:43,082 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:33145-0x1007daaa7160005, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,083 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@67772d6b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@67772d6b 2023-05-30 19:57:43,166 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643626 to hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/oldWALs/jenkins-hbase4.apache.org%2C41207%2C1685476615256.1685476643626 2023-05-30 19:57:43,224 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-30 19:57:43,224 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41207,1685476615256; all regions closed. 2023-05-30 19:57:43,224 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:57:43,230 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/WALs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:57:43,233 DEBUG [RS:0;jenkins-hbase4:41207] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:43,233 INFO [RS:0;jenkins-hbase4:41207] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:57:43,233 INFO [RS:0;jenkins-hbase4:41207] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-30 19:57:43,234 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-30 19:57:43,234 INFO [RS:0;jenkins-hbase4:41207] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41207 2023-05-30 19:57:43,236 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:57:43,236 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41207,1685476615256 2023-05-30 19:57:43,238 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41207,1685476615256] 2023-05-30 19:57:43,238 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41207,1685476615256; numProcessing=2 2023-05-30 19:57:43,239 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41207,1685476615256 already deleted, retry=false 2023-05-30 19:57:43,240 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41207,1685476615256 expired; onlineServers=0 2023-05-30 19:57:43,240 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40009,1685476615207' ***** 2023-05-30 19:57:43,240 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-30 19:57:43,240 DEBUG [M:0;jenkins-hbase4:40009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@156cb2d5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:57:43,240 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:57:43,240 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40009,1685476615207; all regions closed. 2023-05-30 19:57:43,240 DEBUG [M:0;jenkins-hbase4:40009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:57:43,240 DEBUG [M:0;jenkins-hbase4:40009] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-30 19:57:43,240 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-30 19:57:43,240 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476615432] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476615432,5,FailOnTimeoutGroup] 2023-05-30 19:57:43,240 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476615432] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476615432,5,FailOnTimeoutGroup] 2023-05-30 19:57:43,240 DEBUG [M:0;jenkins-hbase4:40009] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-30 19:57:43,242 INFO [M:0;jenkins-hbase4:40009] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-30 19:57:43,242 INFO [M:0;jenkins-hbase4:40009] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-30 19:57:43,242 INFO [M:0;jenkins-hbase4:40009] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-30 19:57:43,242 DEBUG [M:0;jenkins-hbase4:40009] master.HMaster(1512): Stopping service threads 2023-05-30 19:57:43,242 INFO [M:0;jenkins-hbase4:40009] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-30 19:57:43,243 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-30 19:57:43,243 ERROR [M:0;jenkins-hbase4:40009] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT 2023-05-30 19:57:43,243 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-30 19:57:43,243 INFO [M:0;jenkins-hbase4:40009] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-30 19:57:43,243 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-30 19:57:43,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:57:43,244 DEBUG [M:0;jenkins-hbase4:40009] zookeeper.ZKUtil(398): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-30 19:57:43,244 WARN [M:0;jenkins-hbase4:40009] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-30 19:57:43,244 INFO [M:0;jenkins-hbase4:40009] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-30 19:57:43,244 INFO [M:0;jenkins-hbase4:40009] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-30 19:57:43,245 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:57:43,245 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:43,245 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:43,245 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:57:43,245 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-30 19:57:43,245 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.09 KB heapSize=45.73 KB 2023-05-30 19:57:43,252 WARN [Thread-766] hdfs.DataStreamer(1658): Abandoning BP-3938265-172.31.14.131-1685476614615:blk_1073741871_1053 2023-05-30 19:57:43,252 WARN [Thread-766] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41643,DS-51d27a82-8250-4ef8-b815-2b27ac5d3cfd,DISK] 2023-05-30 19:57:43,258 INFO [M:0;jenkins-hbase4:40009] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.09 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e41e1913cd3e4423a542087ad7066049 2023-05-30 19:57:43,264 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e41e1913cd3e4423a542087ad7066049 as hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e41e1913cd3e4423a542087ad7066049 2023-05-30 19:57:43,325 INFO [M:0;jenkins-hbase4:40009] regionserver.HStore(1080): Added hdfs://localhost:43855/user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e41e1913cd3e4423a542087ad7066049, entries=11, sequenceid=92, filesize=7.0 K 2023-05-30 19:57:43,326 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegion(2948): Finished flush of dataSize ~38.09 KB/39009, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 81ms, sequenceid=92, compaction requested=false 2023-05-30 19:57:43,327 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:43,327 DEBUG [M:0;jenkins-hbase4:40009] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:57:43,328 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3086fe4a-9bc6-7925-2a53-cb7ccaf2020a/MasterData/WALs/jenkins-hbase4.apache.org,40009,1685476615207 2023-05-30 19:57:43,331 INFO [M:0;jenkins-hbase4:40009] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-30 19:57:43,331 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 19:57:43,331 INFO [M:0;jenkins-hbase4:40009] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40009 2023-05-30 19:57:43,333 DEBUG [M:0;jenkins-hbase4:40009] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40009,1685476615207 already deleted, retry=false 2023-05-30 19:57:43,414 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,414 INFO [RS:0;jenkins-hbase4:41207] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41207,1685476615256; zookeeper connection closed. 
2023-05-30 19:57:43,415 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): regionserver:41207-0x1007daaa7160001, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,415 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2190074d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2190074d 2023-05-30 19:57:43,416 INFO [Listener at localhost/44807] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-30 19:57:43,515 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,515 INFO [M:0;jenkins-hbase4:40009] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40009,1685476615207; zookeeper connection closed. 2023-05-30 19:57:43,515 DEBUG [Listener at localhost/42029-EventThread] zookeeper.ZKWatcher(600): master:40009-0x1007daaa7160000, quorum=127.0.0.1:62840, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:57:43,516 WARN [Listener at localhost/44807] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:43,520 INFO [Listener at localhost/44807] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:43,530 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:57:43,624 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:43,624 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid 7edf5cf5-1b20-44bf-abdc-84bc34baf02c) service to localhost/127.0.0.1:43855 2023-05-30 19:57:43,625 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data3/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:43,625 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data4/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:43,627 WARN [Listener at localhost/44807] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:43,630 INFO [Listener at localhost/44807] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:43,733 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:43,733 WARN [BP-3938265-172.31.14.131-1685476614615 heartbeating to localhost/127.0.0.1:43855] 
datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-3938265-172.31.14.131-1685476614615 (Datanode Uuid f4dec89a-3b3e-4a50-bdad-3b157f023e4b) service to localhost/127.0.0.1:43855 2023-05-30 19:57:43,734 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data7/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:43,734 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/cluster_405985d5-634a-e1a7-fbba-b65ad12c3324/dfs/data/data8/current/BP-3938265-172.31.14.131-1685476614615] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:43,744 INFO [Listener at localhost/44807] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:43,860 INFO [Listener at localhost/44807] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-30 19:57:43,899 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-30 19:57:43,909 INFO [Listener at localhost/44807] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=74 (was 52) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43855 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43855 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-6-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:43855 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44807 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:43855 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:43855 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43855 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=461 (was 439) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=61 (was 72), ProcessCount=168 (was 168), AvailableMemoryMB=3129 (was 3647) 2023-05-30 19:57:43,918 INFO [Listener at localhost/44807] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=74, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=61, ProcessCount=168, AvailableMemoryMB=3129 2023-05-30 19:57:43,918 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-30 19:57:43,919 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/hadoop.log.dir so I do NOT create it in target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb 2023-05-30 19:57:43,919 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e52f43b8-9e1a-c8ad-adc7-2c8aac419d53/hadoop.tmp.dir so I do NOT create it in target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb 2023-05-30 19:57:43,919 INFO [Listener at localhost/44807] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea, deleteOnExit=true 2023-05-30 19:57:43,919 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-30 19:57:43,919 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/test.cache.data in system properties and HBase conf 2023-05-30 19:57:43,919 INFO [Listener at 
localhost/44807] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/hadoop.tmp.dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/hadoop.log.dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-30 19:57:43,920 DEBUG [Listener at localhost/44807] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:57:43,920 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/nfs.dump.dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:57:43,921 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-30 19:57:43,922 INFO [Listener at localhost/44807] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-30 19:57:43,923 WARN [Listener at localhost/44807] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-30 19:57:43,926 WARN [Listener at localhost/44807] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:57:43,926 WARN [Listener at localhost/44807] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:57:43,972 WARN [Listener at localhost/44807] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:43,973 INFO [Listener at localhost/44807] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:43,978 INFO [Listener at localhost/44807] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_42271_hdfs____7rya60/webapp 2023-05-30 19:57:44,067 INFO [Listener at localhost/44807] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42271 2023-05-30 19:57:44,069 WARN [Listener at localhost/44807] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-30 19:57:44,072 WARN [Listener at localhost/44807] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:57:44,072 WARN [Listener at localhost/44807] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:57:44,117 WARN [Listener at localhost/34399] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:44,128 WARN [Listener at localhost/34399] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:44,130 WARN [Listener at localhost/34399] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:44,131 INFO [Listener at localhost/34399] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:44,135 INFO [Listener at localhost/34399] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_43207_datanode____3ph0ox/webapp 2023-05-30 19:57:44,226 INFO [Listener at localhost/34399] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43207 2023-05-30 19:57:44,232 WARN [Listener at localhost/45811] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:44,254 WARN [Listener at localhost/45811] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:44,256 WARN [Listener at localhost/45811] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:44,257 INFO [Listener at localhost/45811] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:44,261 INFO [Listener at localhost/45811] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_44577_datanode____bj7kjg/webapp 2023-05-30 19:57:44,336 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeff6358037b110fc: Processing first storage report for DS-c53cb32c-1f09-4a56-bf45-db273855fb30 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:57:44,336 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeff6358037b110fc: from storage DS-c53cb32c-1f09-4a56-bf45-db273855fb30 node DatanodeRegistration(127.0.0.1:35017, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=34873, infoSecurePort=0, ipcPort=45811, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:44,336 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeff6358037b110fc: Processing first storage report for DS-6ba38e55-87db-4084-af89-38ba5b843ae7 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:57:44,336 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeff6358037b110fc: from storage DS-6ba38e55-87db-4084-af89-38ba5b843ae7 node DatanodeRegistration(127.0.0.1:35017, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=34873, infoSecurePort=0, ipcPort=45811, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:44,356 INFO [Listener at localhost/45811] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44577 2023-05-30 19:57:44,361 WARN [Listener at localhost/39811] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:44,455 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xace9e89719ae1584: Processing first storage report for DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:57:44,455 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xace9e89719ae1584: from storage DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 node DatanodeRegistration(127.0.0.1:43013, datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=40685, infoSecurePort=0, ipcPort=39811, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-30 19:57:44,456 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xace9e89719ae1584: Processing first storage report for DS-b03ab321-9351-487d-b7ab-4f3b8d970627 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:57:44,456 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xace9e89719ae1584: from storage DS-b03ab321-9351-487d-b7ab-4f3b8d970627 node DatanodeRegistration(127.0.0.1:43013, datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=40685, infoSecurePort=0, ipcPort=39811, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:44,461 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:57:44,470 DEBUG [Listener at localhost/39811] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb 2023-05-30 19:57:44,472 INFO [Listener at localhost/39811] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/zookeeper_0, clientPort=59903, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-30 19:57:44,473 INFO [Listener at localhost/39811] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59903 2023-05-30 19:57:44,474 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,474 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,486 INFO [Listener at localhost/39811] util.FSUtils(471): Created version file at hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5 with version=8 2023-05-30 19:57:44,486 INFO [Listener at localhost/39811] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:57:44,488 INFO [Listener at localhost/39811] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:57:44,489 INFO [Listener at localhost/39811] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42889 2023-05-30 19:57:44,490 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,491 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,492 INFO [Listener at localhost/39811] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42889 connecting to ZooKeeper ensemble=127.0.0.1:59903 2023-05-30 19:57:44,499 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:428890x0, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:57:44,499 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42889-0x1007dab679b0000 connected 2023-05-30 19:57:44,514 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:57:44,515 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:57:44,515 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:57:44,515 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42889 2023-05-30 19:57:44,516 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42889 2023-05-30 19:57:44,516 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42889 2023-05-30 19:57:44,516 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42889 2023-05-30 19:57:44,516 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42889 2023-05-30 19:57:44,516 INFO [Listener at localhost/39811] master.HMaster(444): hbase.rootdir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5, hbase.cluster.distributed=false 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:57:44,529 INFO [Listener at localhost/39811] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:57:44,531 INFO [Listener at localhost/39811] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39567 2023-05-30 19:57:44,531 INFO [Listener at localhost/39811] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 19:57:44,531 DEBUG [Listener at localhost/39811] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 19:57:44,532 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,533 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,534 INFO [Listener at localhost/39811] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39567 connecting to ZooKeeper ensemble=127.0.0.1:59903 2023-05-30 19:57:44,536 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:395670x0, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:57:44,537 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39567-0x1007dab679b0001 connected 2023-05-30 19:57:44,537 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:57:44,538 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:57:44,538 DEBUG [Listener at localhost/39811] zookeeper.ZKUtil(164): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:57:44,540 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39567 2023-05-30 19:57:44,540 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39567 2023-05-30 19:57:44,540 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39567 2023-05-30 19:57:44,541 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39567 2023-05-30 19:57:44,541 DEBUG [Listener at localhost/39811] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39567 2023-05-30 19:57:44,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,543 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:57:44,544 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,545 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:57:44,545 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:57:44,545 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,546 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:57:44,547 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:57:44,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42889,1685476664487 from backup master directory 2023-05-30 19:57:44,549 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,549 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:57:44,549 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-30 19:57:44,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/hbase.id with ID: f923193d-5c10-4cfb-bc8f-704e908de6c3 2023-05-30 19:57:44,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:44,575 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5af5c96e to 127.0.0.1:59903 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:57:44,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71e5cb0c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:57:44,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:57:44,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-30 19:57:44,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:57:44,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store-tmp 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:57:44,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:44,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:57:44,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:57:44,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42889%2C1685476664487, suffix=, logDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487, archiveDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/oldWALs, maxLogs=10 2023-05-30 19:57:44,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476664603 2023-05-30 19:57:44,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] 2023-05-30 19:57:44,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:57:44,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:44,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,619 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,621 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-30 19:57:44,621 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-30 19:57:44,622 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:44,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:57:44,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:57:44,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=824576, jitterRate=0.04850330948829651}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:57:44,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:57:44,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-30 19:57:44,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-30 19:57:44,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-30 19:57:44,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-30 19:57:44,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-30 19:57:44,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-30 19:57:44,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-30 19:57:44,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-30 19:57:44,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-30 19:57:44,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-30 19:57:44,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-30 19:57:44,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-30 19:57:44,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-30 19:57:44,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-30 19:57:44,650 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-30 19:57:44,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-30 19:57:44,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-30 19:57:44,654 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:57:44,654 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:57:44,654 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42889,1685476664487, sessionid=0x1007dab679b0000, setting cluster-up flag (Was=false) 2023-05-30 19:57:44,659 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-30 19:57:44,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,669 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 
19:57:44,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-30 19:57:44,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:44,676 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.hbase-snapshot/.tmp 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:57:44,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476694685 2023-05-30 19:57:44,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-30 19:57:44,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-30 19:57:44,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-30 19:57:44,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-30 19:57:44,687 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-30 19:57:44,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-30 19:57:44,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,690 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:57:44,691 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-30 19:57:44,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-30 19:57:44,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-30 19:57:44,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-30 19:57:44,693 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:57:44,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-30 19:57:44,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-30 19:57:44,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476664699,5,FailOnTimeoutGroup] 2023-05-30 19:57:44,702 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476664699,5,FailOnTimeoutGroup] 2023-05-30 19:57:44,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-30 19:57:44,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-30 19:57:44,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,722 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:57:44,723 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:57:44,723 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5 2023-05-30 19:57:44,744 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:44,745 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(951): ClusterId : f923193d-5c10-4cfb-bc8f-704e908de6c3 2023-05-30 19:57:44,746 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 19:57:44,748 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:57:44,749 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/info 2023-05-30 19:57:44,750 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:57:44,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:44,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:57:44,752 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:57:44,752 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:57:44,753 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:44,753 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:57:44,755 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 19:57:44,755 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 19:57:44,755 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/table 2023-05-30 19:57:44,755 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:57:44,756 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:44,757 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 19:57:44,757 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740 2023-05-30 19:57:44,758 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740 2023-05-30 19:57:44,758 DEBUG [RS:0;jenkins-hbase4:39567] zookeeper.ReadOnlyZKClient(139): Connect 0x1064eac2 to 127.0.0.1:59903 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:57:44,761 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:57:44,762 DEBUG [RS:0;jenkins-hbase4:39567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@145b6c16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:57:44,763 DEBUG [RS:0;jenkins-hbase4:39567] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5159e0f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:57:44,763 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:57:44,766 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:57:44,767 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=755028, jitterRate=-0.039933010935783386}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:57:44,767 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:57:44,767 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:57:44,767 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:57:44,767 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting 
without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:57:44,767 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:57:44,767 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:57:44,771 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:57:44,771 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:57:44,772 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:57:44,772 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 19:57:44,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 19:57:44,773 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39567 2023-05-30 19:57:44,773 INFO [RS:0;jenkins-hbase4:39567] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 19:57:44,773 INFO [RS:0;jenkins-hbase4:39567] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 19:57:44,773 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1022): About to register with Master. 2023-05-30 19:57:44,774 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 19:57:44,775 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,42889,1685476664487 with isa=jenkins-hbase4.apache.org/172.31.14.131:39567, startcode=1685476664529 2023-05-30 19:57:44,775 DEBUG [RS:0;jenkins-hbase4:39567] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 19:57:44,775 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 19:57:44,778 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60237, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:57:44,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,780 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5 2023-05-30 19:57:44,780 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34399 2023-05-30 19:57:44,780 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:57:44,781 DEBUG [Listener at 
localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:57:44,782 DEBUG [RS:0;jenkins-hbase4:39567] zookeeper.ZKUtil(162): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,782 WARN [RS:0;jenkins-hbase4:39567] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-30 19:57:44,782 INFO [RS:0;jenkins-hbase4:39567] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:57:44,782 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1946): logDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,782 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39567,1685476664529] 2023-05-30 19:57:44,786 DEBUG [RS:0;jenkins-hbase4:39567] zookeeper.ZKUtil(162): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,787 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:57:44,787 INFO [RS:0;jenkins-hbase4:39567] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:57:44,789 INFO [RS:0;jenkins-hbase4:39567] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:57:44,790 INFO [RS:0;jenkins-hbase4:39567] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:57:44,790 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,791 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:57:44,792 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,793 DEBUG [RS:0;jenkins-hbase4:39567] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:57:44,794 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,794 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,794 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:44,805 INFO [RS:0;jenkins-hbase4:39567] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:57:44,805 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39567,1685476664529-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 19:57:44,816 INFO [RS:0;jenkins-hbase4:39567] regionserver.Replication(203): jenkins-hbase4.apache.org,39567,1685476664529 started 2023-05-30 19:57:44,816 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39567,1685476664529, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39567, sessionid=0x1007dab679b0001 2023-05-30 19:57:44,816 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:57:44,816 DEBUG [RS:0;jenkins-hbase4:39567] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,816 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39567,1685476664529' 2023-05-30 19:57:44,816 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:57:44,816 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39567,1685476664529' 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:57:44,817 DEBUG [RS:0;jenkins-hbase4:39567] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:57:44,817 INFO [RS:0;jenkins-hbase4:39567] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 19:57:44,817 INFO [RS:0;jenkins-hbase4:39567] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-30 19:57:44,919 INFO [RS:0;jenkins-hbase4:39567] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39567%2C1685476664529, suffix=, logDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529, archiveDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/oldWALs, maxLogs=32 2023-05-30 19:57:44,925 DEBUG [jenkins-hbase4:42889] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 19:57:44,926 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39567,1685476664529, state=OPENING 2023-05-30 19:57:44,928 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 19:57:44,930 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:44,930 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39567,1685476664529}] 2023-05-30 19:57:44,930 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:57:44,933 INFO [RS:0;jenkins-hbase4:39567] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 2023-05-30 19:57:44,933 DEBUG [RS:0;jenkins-hbase4:39567] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] 2023-05-30 19:57:45,084 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:45,085 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 19:57:45,087 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54822, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 19:57:45,090 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 19:57:45,091 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:57:45,092 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39567%2C1685476664529.meta, suffix=.meta, logDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529, archiveDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/oldWALs, maxLogs=32 2023-05-30 19:57:45,100 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.meta.1685476665093.meta 2023-05-30 19:57:45,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] 2023-05-30 19:57:45,100 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:57:45,101 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 19:57:45,101 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 19:57:45,101 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 19:57:45,101 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 19:57:45,101 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:45,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 19:57:45,102 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 19:57:45,104 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:57:45,105 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/info 2023-05-30 19:57:45,105 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/info 2023-05-30 19:57:45,105 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:57:45,106 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:45,106 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:57:45,107 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:57:45,107 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:57:45,107 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:57:45,108 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:45,108 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:57:45,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/table 2023-05-30 19:57:45,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740/table 2023-05-30 19:57:45,110 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:57:45,110 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:45,111 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740 2023-05-30 19:57:45,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/meta/1588230740 2023-05-30 19:57:45,115 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:57:45,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:57:45,117 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828729, jitterRate=0.053784072399139404}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:57:45,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:57:45,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476665084 2023-05-30 19:57:45,122 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 19:57:45,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 19:57:45,124 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39567,1685476664529, state=OPEN 2023-05-30 19:57:45,125 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 19:57:45,125 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:57:45,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 19:57:45,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39567,1685476664529 in 195 msec 2023-05-30 19:57:45,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 19:57:45,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 356 msec 2023-05-30 19:57:45,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 454 msec 2023-05-30 19:57:45,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476665132, completionTime=-1 2023-05-30 19:57:45,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 19:57:45,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-30 19:57:45,135 DEBUG [hconnection-0xd8fe96-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:57:45,136 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:57:45,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 19:57:45,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476725138 2023-05-30 19:57:45,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685476785138 2023-05-30 19:57:45,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42889,1685476664487-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42889,1685476664487-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42889,1685476664487-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42889, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 19:57:45,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 19:57:45,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:57:45,146 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 19:57:45,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 19:57:45,148 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:57:45,149 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:57:45,151 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,151 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59 empty. 2023-05-30 19:57:45,152 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,152 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 19:57:45,162 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 19:57:45,163 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 32fb1ef92bd71752385108bb2110eb59, NAME => 'hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp 2023-05-30 19:57:45,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:45,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 32fb1ef92bd71752385108bb2110eb59, disabling compactions & flushes 2023-05-30 19:57:45,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 
2023-05-30 19:57:45,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:57:45,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. after waiting 0 ms 2023-05-30 19:57:45,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:57:45,171 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:57:45,171 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 32fb1ef92bd71752385108bb2110eb59: 2023-05-30 19:57:45,173 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:57:45,174 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476665173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476665173"}]},"ts":"1685476665173"} 2023-05-30 19:57:45,176 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:57:45,177 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:57:45,177 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476665177"}]},"ts":"1685476665177"} 2023-05-30 19:57:45,178 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 19:57:45,184 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=32fb1ef92bd71752385108bb2110eb59, ASSIGN}] 2023-05-30 19:57:45,186 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=32fb1ef92bd71752385108bb2110eb59, ASSIGN 2023-05-30 19:57:45,186 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=32fb1ef92bd71752385108bb2110eb59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39567,1685476664529; forceNewPlan=false, retain=false 2023-05-30 19:57:45,337 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=32fb1ef92bd71752385108bb2110eb59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:45,338 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476665337"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476665337"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476665337"}]},"ts":"1685476665337"} 2023-05-30 19:57:45,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 32fb1ef92bd71752385108bb2110eb59, server=jenkins-hbase4.apache.org,39567,1685476664529}] 2023-05-30 19:57:45,496 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:57:45,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 32fb1ef92bd71752385108bb2110eb59, NAME => 'hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:57:45,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:45,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,498 INFO [StoreOpener-32fb1ef92bd71752385108bb2110eb59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,499 DEBUG [StoreOpener-32fb1ef92bd71752385108bb2110eb59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59/info 2023-05-30 19:57:45,499 DEBUG [StoreOpener-32fb1ef92bd71752385108bb2110eb59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59/info 2023-05-30 19:57:45,500 INFO [StoreOpener-32fb1ef92bd71752385108bb2110eb59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 32fb1ef92bd71752385108bb2110eb59 columnFamilyName info 2023-05-30 19:57:45,500 INFO [StoreOpener-32fb1ef92bd71752385108bb2110eb59-1] regionserver.HStore(310): Store=32fb1ef92bd71752385108bb2110eb59/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:45,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,504 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:57:45,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/hbase/namespace/32fb1ef92bd71752385108bb2110eb59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:57:45,506 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 32fb1ef92bd71752385108bb2110eb59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=801044, jitterRate=0.018580496311187744}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:57:45,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 32fb1ef92bd71752385108bb2110eb59: 2023-05-30 19:57:45,508 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59., pid=6, masterSystemTime=1685476665492 2023-05-30 19:57:45,510 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:57:45,510 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 
2023-05-30 19:57:45,511 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=32fb1ef92bd71752385108bb2110eb59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:45,511 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476665511"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476665511"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476665511"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476665511"}]},"ts":"1685476665511"} 2023-05-30 19:57:45,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 19:57:45,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 32fb1ef92bd71752385108bb2110eb59, server=jenkins-hbase4.apache.org,39567,1685476664529 in 173 msec 2023-05-30 19:57:45,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 19:57:45,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=32fb1ef92bd71752385108bb2110eb59, ASSIGN in 331 msec 2023-05-30 19:57:45,521 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:57:45,522 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476665521"}]},"ts":"1685476665521"} 2023-05-30 19:57:45,523 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 19:57:45,526 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:57:45,528 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 380 msec 2023-05-30 19:57:45,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 19:57:45,551 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:57:45,551 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:45,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 19:57:45,564 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): 
master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:57:45,569 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-30 19:57:45,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 19:57:45,585 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:57:45,589 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-30 19:57:45,602 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 19:57:45,605 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 19:57:45,605 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.056sec 2023-05-30 19:57:45,605 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 19:57:45,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 19:57:45,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 19:57:45,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42889,1685476664487-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 19:57:45,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42889,1685476664487-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 19:57:45,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 19:57:45,643 DEBUG [Listener at localhost/39811] zookeeper.ReadOnlyZKClient(139): Connect 0x7328bd00 to 127.0.0.1:59903 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:57:45,647 DEBUG [Listener at localhost/39811] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@375508e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:57:45,648 DEBUG [hconnection-0x7f111523-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:57:45,651 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54842, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:57:45,652 INFO [Listener at localhost/39811] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:57:45,652 INFO [Listener at localhost/39811] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:57:45,656 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 19:57:45,656 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:57:45,657 INFO [Listener at localhost/39811] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 19:57:45,657 INFO [Listener at localhost/39811] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-30 19:57:45,657 INFO [Listener at localhost/39811] wal.TestLogRolling(432): Replication=2 2023-05-30 19:57:45,659 DEBUG [Listener at localhost/39811] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-30 19:57:45,661 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-30 19:57:45,663 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-30 19:57:45,663 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-30 19:57:45,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:57:45,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-30 19:57:45,667 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:57:45,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-30 19:57:45,668 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:57:45,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:57:45,670 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:45,671 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8 empty. 
2023-05-30 19:57:45,672 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:45,672 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-30 19:57:45,684 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-30 19:57:45,686 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => b21b6749750cdd9d4cb4e904894873d8, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/.tmp 2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing b21b6749750cdd9d4cb4e904894873d8, disabling compactions & flushes 2023-05-30 19:57:45,694 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. after waiting 0 ms 2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:45,694 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 
2023-05-30 19:57:45,694 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for b21b6749750cdd9d4cb4e904894873d8: 2023-05-30 19:57:45,697 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:57:45,698 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685476665698"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476665698"}]},"ts":"1685476665698"} 2023-05-30 19:57:45,700 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:57:45,701 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:57:45,701 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476665701"}]},"ts":"1685476665701"} 2023-05-30 19:57:45,703 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-30 19:57:45,707 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=b21b6749750cdd9d4cb4e904894873d8, ASSIGN}] 2023-05-30 19:57:45,708 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=b21b6749750cdd9d4cb4e904894873d8, ASSIGN 2023-05-30 19:57:45,709 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=b21b6749750cdd9d4cb4e904894873d8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39567,1685476664529; forceNewPlan=false, retain=false 2023-05-30 19:57:45,861 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b21b6749750cdd9d4cb4e904894873d8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:45,861 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685476665861"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476665861"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476665861"}]},"ts":"1685476665861"} 2023-05-30 19:57:45,863 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure b21b6749750cdd9d4cb4e904894873d8, server=jenkins-hbase4.apache.org,39567,1685476664529}] 
2023-05-30 19:57:46,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:46,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b21b6749750cdd9d4cb4e904894873d8, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:57:46,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:57:46,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,022 INFO [StoreOpener-b21b6749750cdd9d4cb4e904894873d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,024 DEBUG [StoreOpener-b21b6749750cdd9d4cb4e904894873d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8/info 2023-05-30 19:57:46,024 DEBUG [StoreOpener-b21b6749750cdd9d4cb4e904894873d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8/info 2023-05-30 19:57:46,024 INFO [StoreOpener-b21b6749750cdd9d4cb4e904894873d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b21b6749750cdd9d4cb4e904894873d8 columnFamilyName info 2023-05-30 19:57:46,025 INFO [StoreOpener-b21b6749750cdd9d4cb4e904894873d8-1] regionserver.HStore(310): Store=b21b6749750cdd9d4cb4e904894873d8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:57:46,025 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:57:46,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/data/default/TestLogRolling-testLogRollOnPipelineRestart/b21b6749750cdd9d4cb4e904894873d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:57:46,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b21b6749750cdd9d4cb4e904894873d8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=700340, jitterRate=-0.10947270691394806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:57:46,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b21b6749750cdd9d4cb4e904894873d8: 2023-05-30 19:57:46,031 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8., pid=11, masterSystemTime=1685476666016 2023-05-30 19:57:46,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:46,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 
2023-05-30 19:57:46,034 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b21b6749750cdd9d4cb4e904894873d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:57:46,034 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685476666034"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476666034"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476666034"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476666034"}]},"ts":"1685476666034"} 2023-05-30 19:57:46,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-30 19:57:46,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure b21b6749750cdd9d4cb4e904894873d8, server=jenkins-hbase4.apache.org,39567,1685476664529 in 173 msec 2023-05-30 19:57:46,040 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-30 19:57:46,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=b21b6749750cdd9d4cb4e904894873d8, ASSIGN in 331 msec 2023-05-30 19:57:46,041 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:57:46,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476666041"}]},"ts":"1685476666041"} 2023-05-30 19:57:46,043 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-30 19:57:46,045 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:57:46,047 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 381 msec 2023-05-30 19:57:48,404 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 19:57:50,787 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-30 19:57:55,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:57:55,670 INFO [Listener at localhost/39811] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-30 19:57:55,673 DEBUG [Listener at localhost/39811] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 
2023-05-30 19:57:55,673 DEBUG [Listener at localhost/39811] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:57:57,679 INFO [Listener at localhost/39811] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 2023-05-30 19:57:57,679 WARN [Listener at localhost/39811] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:57,681 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:57,682 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-30 19:57:57,683 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-30 19:57:57,683 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.meta.1685476665093.meta block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]) is bad. 
2023-05-30 19:57:57,683 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]) is bad. 2023-05-30 19:57:57,683 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476664603 block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:43013,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]) is bad. 2023-05-30 19:57:57,683 WARN [PacketResponder: BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43013]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,684 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:42454 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42454 dst: /127.0.0.1:35017 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,685 WARN [PacketResponder: BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43013]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,688 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1165464482_17 at /127.0.0.1:42408 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42408 dst: /127.0.0.1:35017 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,689 INFO [Listener at localhost/39811] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:57,690 ERROR [DataXceiver 
for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:42444 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42444 dst: /127.0.0.1:35017 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35017 remote=/127.0.0.1:42444]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,691 WARN [PacketResponder: BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35017]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,692 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:52908 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:43013:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52908 dst: /127.0.0.1:43013 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,792 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:52918 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:43013:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52918 dst: /127.0.0.1:43013 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,793 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:57,793 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1165464482_17 at /127.0.0.1:52880 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:43013:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52880 dst: /127.0.0.1:43013 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:57,793 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 79ce9add-6f2c-4319-93ef-41b8122cdd9a) service to localhost/127.0.0.1:34399 2023-05-30 19:57:57,795 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data3/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:57,795 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data4/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:57,801 WARN [Listener at localhost/39811] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:57,804 WARN [Listener at localhost/39811] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:57,805 INFO [Listener at localhost/39811] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:57,809 INFO [Listener at localhost/39811] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_44203_datanode____eo92u4/webapp 2023-05-30 19:57:57,898 INFO [Listener at localhost/39811] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44203 2023-05-30 19:57:57,905 WARN [Listener at localhost/44693] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:57,909 WARN [Listener at localhost/44693] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:57:57,910 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:57,910 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:57,910 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:57:57,915 INFO [Listener at localhost/44693] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:57:57,977 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdae397a20bbc095c: Processing first storage report for DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:57:57,978 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdae397a20bbc095c: from storage DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 node DatanodeRegistration(127.0.0.1:40605, datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=43847, infoSecurePort=0, ipcPort=44693, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-30 19:57:57,978 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdae397a20bbc095c: Processing first storage report for DS-b03ab321-9351-487d-b7ab-4f3b8d970627 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:57:57,978 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdae397a20bbc095c: from storage DS-b03ab321-9351-487d-b7ab-4f3b8d970627 node DatanodeRegistration(127.0.0.1:40605, 
datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=43847, infoSecurePort=0, ipcPort=44693, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:58,018 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1165464482_17 at /127.0.0.1:43080 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43080 dst: /127.0.0.1:35017 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:58,019 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:57:58,020 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 5cb4afa3-6dc9-43ef-8cff-67641b02802b) service to localhost/127.0.0.1:34399 2023-05-30 19:57:58,019 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:43090 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43090 dst: /127.0.0.1:35017 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:58,019 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:43068 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:35017:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43068 dst: /127.0.0.1:35017 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:57:58,022 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:58,022 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data2/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:57:58,029 WARN [Listener at localhost/44693] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:57:58,031 WARN [Listener at localhost/44693] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:57:58,032 INFO [Listener at localhost/44693] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:57:58,038 INFO [Listener at localhost/44693] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_38457_datanode____i773y0/webapp 2023-05-30 19:57:58,130 INFO [Listener at localhost/44693] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38457 2023-05-30 19:57:58,136 WARN [Listener at localhost/45095] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:57:58,216 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x48d5f6d8277c577e: Processing first storage report for DS-c53cb32c-1f09-4a56-bf45-db273855fb30 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:57:58,216 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48d5f6d8277c577e: from storage DS-c53cb32c-1f09-4a56-bf45-db273855fb30 node DatanodeRegistration(127.0.0.1:38119, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=34123, infoSecurePort=0, ipcPort=45095, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:58,216 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48d5f6d8277c577e: Processing first storage report for DS-6ba38e55-87db-4084-af89-38ba5b843ae7 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:57:58,216 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48d5f6d8277c577e: from storage DS-6ba38e55-87db-4084-af89-38ba5b843ae7 node DatanodeRegistration(127.0.0.1:38119, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=34123, infoSecurePort=0, ipcPort=45095, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:57:59,140 INFO [Listener at localhost/45095] wal.TestLogRolling(481): Data Nodes restarted 2023-05-30 19:57:59,142 INFO [Listener at localhost/45095] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-30 19:57:59,142 WARN [RS:0;jenkins-hbase4:39567.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:59,144 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39567%2C1685476664529:(num 1685476664920) roll requested 2023-05-30 19:57:59,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39567] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:59,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39567] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:54842 deadline: 1685476689142, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-30 19:57:59,152 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 newFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 2023-05-30 19:57:59,152 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-30 19:57:59,152 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 2023-05-30 19:57:59,153 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:38119,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] 2023-05-30 19:57:59,153 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 is not closed yet, will try archiving it next time 2023-05-30 19:57:59,153 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:57:59,153 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:11,200 INFO [Listener at localhost/45095] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-30 19:58:13,202 WARN [Listener at localhost/45095] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:58:13,203 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:38119,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-30 19:58:13,204 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK], DatanodeInfoWithStorage[127.0.0.1:38119,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:38119,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]) is bad. 
2023-05-30 19:58:13,204 WARN [PacketResponder: BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:38119]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:13,204 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:58782 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58782 dst: /127.0.0.1:40605 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:13,208 INFO [Listener at localhost/45095] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:58:13,215 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 5cb4afa3-6dc9-43ef-8cff-67641b02802b) service to localhost/127.0.0.1:34399 2023-05-30 19:58:13,215 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep 
interrupted 2023-05-30 19:58:13,216 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data2/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:13,311 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:56186 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38119:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56186 dst: /127.0.0.1:38119 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:13,318 WARN [Listener at localhost/45095] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:58:13,321 WARN [Listener at localhost/45095] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:58:13,322 INFO [Listener at localhost/45095] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:58:13,327 INFO [Listener at localhost/45095] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_37479_datanode____7sl5l/webapp 2023-05-30 19:58:13,419 INFO [Listener at localhost/45095] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37479 2023-05-30 19:58:13,428 WARN [Listener at localhost/46469] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not 
log4j 2023-05-30 19:58:13,432 WARN [Listener at localhost/46469] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:58:13,432 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:58:13,436 INFO [Listener at localhost/46469] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:58:13,495 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfcbdae1de25961a4: Processing first storage report for DS-c53cb32c-1f09-4a56-bf45-db273855fb30 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:58:13,495 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfcbdae1de25961a4: from storage DS-c53cb32c-1f09-4a56-bf45-db273855fb30 node DatanodeRegistration(127.0.0.1:35057, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=33637, infoSecurePort=0, ipcPort=46469, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:13,495 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfcbdae1de25961a4: Processing first storage report for DS-6ba38e55-87db-4084-af89-38ba5b843ae7 from datanode 5cb4afa3-6dc9-43ef-8cff-67641b02802b 2023-05-30 19:58:13,495 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfcbdae1de25961a4: from storage DS-6ba38e55-87db-4084-af89-38ba5b843ae7 node DatanodeRegistration(127.0.0.1:35057, datanodeUuid=5cb4afa3-6dc9-43ef-8cff-67641b02802b, infoPort=33637, infoSecurePort=0, ipcPort=46469, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:13,539 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-788239568_17 at /127.0.0.1:45456 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:40605:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45456 dst: /127.0.0.1:40605 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:13,540 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:58:13,540 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 79ce9add-6f2c-4319-93ef-41b8122cdd9a) service to localhost/127.0.0.1:34399 2023-05-30 19:58:13,541 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data3/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:13,541 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data4/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:13,547 WARN [Listener at localhost/46469] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:58:13,549 WARN [Listener at localhost/46469] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:58:13,550 INFO [Listener at localhost/46469] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:58:13,555 INFO [Listener at localhost/46469] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/java.io.tmpdir/Jetty_localhost_33737_datanode____9vfjs2/webapp 2023-05-30 19:58:13,646 INFO [Listener at localhost/46469] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33737 2023-05-30 19:58:13,656 WARN [Listener at localhost/43261] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:58:13,722 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8170804f48494aa: Processing first storage report for DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:58:13,722 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8170804f48494aa: from storage DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3 node DatanodeRegistration(127.0.0.1:45227, datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=38137, infoSecurePort=0, ipcPort=43261, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:13,722 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8170804f48494aa: Processing first storage report for DS-b03ab321-9351-487d-b7ab-4f3b8d970627 from datanode 79ce9add-6f2c-4319-93ef-41b8122cdd9a 2023-05-30 19:58:13,722 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8170804f48494aa: from storage DS-b03ab321-9351-487d-b7ab-4f3b8d970627 node DatanodeRegistration(127.0.0.1:45227, datanodeUuid=79ce9add-6f2c-4319-93ef-41b8122cdd9a, infoPort=38137, infoSecurePort=0, ipcPort=43261, storageInfo=lv=-57;cid=testClusterID;nsid=403654600;c=1685476663929), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:14,660 INFO [Listener at localhost/43261] wal.TestLogRolling(498): Data Nodes restarted 2023-05-30 19:58:14,661 INFO [Listener at localhost/43261] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-30 19:58:14,662 WARN [RS:0;jenkins-hbase4:39567.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,663 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39567%2C1685476664529:(num 1685476679144) roll requested 2023-05-30 19:58:14,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39567] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39567] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:54842 deadline: 1685476704662, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-30 19:58:14,671 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 newFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 2023-05-30 19:58:14,671 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-30 19:58:14,671 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 2023-05-30 19:58:14,671 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35057,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:45227,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] 2023-05-30 19:58:14,671 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,671 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 is not closed yet, will try archiving it next time 2023-05-30 19:58:14,672 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40605,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,685 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,686 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C42889%2C1685476664487:(num 1685476664603) roll requested 2023-05-30 19:58:14,686 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,686 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,693 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-30 19:58:14,693 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476664603 with entries=88, filesize=43.79 KB; new WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476694686 2023-05-30 19:58:14,694 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35057,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:45227,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] 2023-05-30 19:58:14,694 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476664603 is not closed yet, will try archiving it next time 2023-05-30 19:58:14,694 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:14,694 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487/jenkins-hbase4.apache.org%2C42889%2C1685476664487.1685476664603; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:26,764 DEBUG [Listener at localhost/43261] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 newFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 2023-05-30 19:58:26,766 INFO [Listener at localhost/43261] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 2023-05-30 19:58:26,769 DEBUG [Listener at localhost/43261] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35057,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:45227,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]] 2023-05-30 19:58:26,770 DEBUG [Listener at localhost/43261] wal.AbstractFSWAL(716): hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 is not closed yet, will try archiving it next time 2023-05-30 19:58:26,770 DEBUG [Listener at localhost/43261] wal.TestLogRolling(512): recovering lease for hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 2023-05-30 19:58:26,771 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 2023-05-30 19:58:26,773 WARN [IPC Server handler 3 on default port 34399] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1015 2023-05-30 19:58:26,776 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 after 5ms 2023-05-30 19:58:27,745 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@2c323cb1] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1713610059-172.31.14.131-1685476663929:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:45227,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data4/current/BP-1713610059-172.31.14.131-1685476663929/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:30,776 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 after 4005ms 2023-05-30 19:58:30,777 DEBUG [Listener at localhost/43261] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476664920 2023-05-30 19:58:30,785 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685476665506/Put/vlen=175/seqid=0] 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #4: [default/info:d/1685476665560/Put/vlen=9/seqid=0] 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #5: [hbase/info:d/1685476665582/Put/vlen=7/seqid=0] 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #3: 
[\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685476666031/Put/vlen=231/seqid=0] 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #4: [row1002/info:/1685476675677/Put/vlen=1045/seqid=0] 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-30 19:58:30,786 DEBUG [Listener at localhost/43261] wal.TestLogRolling(512): recovering lease for hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 2023-05-30 19:58:30,786 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 2023-05-30 19:58:30,787 WARN [IPC Server handler 2 on default port 34399] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018 2023-05-30 19:58:30,787 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 after 1ms 2023-05-30 19:58:31,726 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@7e7a05b5] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1713610059-172.31.14.131-1685476663929:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:35057,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current/BP-1713610059-172.31.14.131-1685476663929/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current/BP-1713610059-172.31.14.131-1685476663929/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-30 19:58:34,788 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 after 4002ms 2023-05-30 19:58:34,788 DEBUG [Listener at localhost/43261] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476679144 2023-05-30 19:58:34,792 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #6: [row1003/info:/1685476689195/Put/vlen=1045/seqid=0] 2023-05-30 19:58:34,792 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #7: [row1004/info:/1685476691200/Put/vlen=1045/seqid=0] 2023-05-30 19:58:34,792 DEBUG [Listener at localhost/43261] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-30 19:58:34,792 DEBUG [Listener at localhost/43261] wal.TestLogRolling(512): recovering lease for hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 2023-05-30 19:58:34,792 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 2023-05-30 19:58:34,793 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 after 1ms 2023-05-30 19:58:34,793 DEBUG [Listener at localhost/43261] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476694663 2023-05-30 19:58:34,796 DEBUG [Listener at localhost/43261] wal.TestLogRolling(522): #9: [row1005/info:/1685476704753/Put/vlen=1045/seqid=0] 2023-05-30 19:58:34,796 DEBUG [Listener at localhost/43261] wal.TestLogRolling(512): recovering lease for hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 2023-05-30 19:58:34,796 INFO [Listener at localhost/43261] 
util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 2023-05-30 19:58:34,796 WARN [IPC Server handler 0 on default port 34399] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-30 19:58:34,796 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 after 0ms 2023-05-30 19:58:35,725 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1165464482_17 at /127.0.0.1:35940 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:35057:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35940 dst: /127.0.0.1:35057 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:35057 remote=/127.0.0.1:35940]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:35,726 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1165464482_17 at /127.0.0.1:33764 [Receiving block BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:45227:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33764 dst: /127.0.0.1:45227 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:35,726 WARN [ResponseProcessor for block BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-30 19:58:35,727 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 block BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:35057,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK], DatanodeInfoWithStorage[127.0.0.1:45227,DS-5c846b2d-a8c9-4464-851e-0b6750fd1ad3,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:35057,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]) is bad. 
2023-05-30 19:58:35,732 WARN [DataStreamer for file /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 block BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,797 INFO [Listener at localhost/43261] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 after 4001ms 2023-05-30 19:58:38,797 DEBUG [Listener at localhost/43261] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 2023-05-30 19:58:38,801 DEBUG [Listener at localhost/43261] wal.ProtobufLogReader(420): EOF at position 83 2023-05-30 19:58:38,802 INFO [Listener at localhost/43261] regionserver.HRegion(2745): Flushing b21b6749750cdd9d4cb4e904894873d8 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-30 19:58:38,804 WARN [RS:0;jenkins-hbase4:39567.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,804 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39567%2C1685476664529:(num 1685476706755) roll requested 2023-05-30 19:58:38,804 DEBUG [Listener at localhost/43261] regionserver.HRegion(2446): Flush status journal for b21b6749750cdd9d4cb4e904894873d8: 2023-05-30 19:58:38,804 INFO [Listener at localhost/43261] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) 
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,806 INFO [Listener at localhost/43261] regionserver.HRegion(2745): Flushing 32fb1ef92bd71752385108bb2110eb59 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-30 19:58:38,806 DEBUG [Listener at localhost/43261] regionserver.HRegion(2446): Flush status journal for 32fb1ef92bd71752385108bb2110eb59: 2023-05-30 19:58:38,806 INFO [Listener at localhost/43261] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,807 INFO [Listener at localhost/43261] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-30 19:58:38,808 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,808 DEBUG [Listener at localhost/43261] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-30 19:58:38,808 INFO [Listener at localhost/43261] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,810 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 19:58:38,810 INFO [Listener at localhost/43261] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-30 19:58:38,811 DEBUG [Listener at localhost/43261] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7328bd00 to 127.0.0.1:59903 2023-05-30 19:58:38,811 DEBUG [Listener at localhost/43261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:58:38,811 DEBUG [Listener at localhost/43261] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 19:58:38,811 DEBUG [Listener at localhost/43261] util.JVMClusterUtil(257): Found active master hash=864552797, stopped=false 2023-05-30 19:58:38,811 INFO [Listener at localhost/43261] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:58:38,814 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 newFile=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476718804 2023-05-30 19:58:38,814 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-30 19:58:38,814 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476718804 2023-05-30 19:58:38,814 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:58:38,814 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,814 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:58:38,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:58:38,815 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755 failed. 
Cause="Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-30 19:58:38,814 INFO [Listener at localhost/43261] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 19:58:38,815 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,814 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
2023-05-30 19:58:38,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:58:38,815 DEBUG [Listener at localhost/43261] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5af5c96e to 127.0.0.1:59903 2023-05-30 19:58:38,816 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,816 DEBUG [Listener at localhost/43261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:58:38,816 INFO [Listener at localhost/43261] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,39567,1685476664529' ***** 2023-05-30 19:58:38,816 INFO [Listener at localhost/43261] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 19:58:38,816 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529 
2023-05-30 19:58:38,817 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-30 19:58:38,818 INFO [RS:0;jenkins-hbase4:39567] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 19:58:38,818 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 19:58:38,818 INFO [RS:0;jenkins-hbase4:39567] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 19:58:38,818 INFO [RS:0;jenkins-hbase4:39567] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 19:58:38,819 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(3303): Received CLOSE for b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:58:38,819 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:58:38,819 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,820 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(3303): Received CLOSE for 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:58:38,820 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:35017,DS-c53cb32c-1f09-4a56-bf45-db273855fb30,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,820 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:58:38,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b21b6749750cdd9d4cb4e904894873d8, disabling compactions & flushes 2023-05-30 19:58:38,820 DEBUG [RS:0;jenkins-hbase4:39567] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1064eac2 to 127.0.0.1:59903 2023-05-30 19:58:38,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:38,820 DEBUG [RS:0;jenkins-hbase4:39567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:58:38,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:38,820 INFO [RS:0;jenkins-hbase4:39567] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 19:58:38,820 INFO [RS:0;jenkins-hbase4:39567] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 19:58:38,820 INFO [RS:0;jenkins-hbase4:39567] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-30 19:58:38,820 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,39567,1685476664529: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,821 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 19:58:38,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. after waiting 0 ms 2023-05-30 19:58:38,821 ERROR [regionserver/jenkins-hbase4:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 
2023-05-30 19:58:38,821 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-30 19:58:38,821 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-30 19:58:38,821 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1478): Online Regions={b21b6749750cdd9d4cb4e904894873d8=TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8., 32fb1ef92bd71752385108bb2110eb59=hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59., 1588230740=hbase:meta,,1.1588230740} 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:58:38,821 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1504): Waiting on 1588230740, 32fb1ef92bd71752385108bb2110eb59, b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b21b6749750cdd9d4cb4e904894873d8: 2023-05-30 19:58:38,821 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 32fb1ef92bd71752385108bb2110eb59, disabling compactions & flushes 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:58:38,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:58:38,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. after waiting 0 ms 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 32fb1ef92bd71752385108bb2110eb59: 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 
2023-05-30 19:58:38,822 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-30 19:58:38,822 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 3024 in region hbase:meta,,1.1588230740 2023-05-30 19:58:38,822 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-30 19:58:38,822 DEBUG [regionserver/jenkins-hbase4:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-30 19:58:38,822 INFO [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1161822208, "init": 513802240, "max": 2051014656, "used": 605135848 }, "NonHeapMemoryUsage": { "committed": 139223040, "init": 2555904, "max": -1, "used": 136647872 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-30 19:58:38,822 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:58:38,822 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-30 19:58:38,823 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42889] master.MasterRpcServices(609): jenkins-hbase4.apache.org,39567,1685476664529 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,39567,1685476664529: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/WALs/jenkins-hbase4.apache.org,39567,1685476664529/jenkins-hbase4.apache.org%2C39567%2C1685476664529.1685476706755, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1713610059-172.31.14.131-1685476663929:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-30 19:58:38,823 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C39567%2C1685476664529.meta:.meta(num 1685476665093) roll requested 2023-05-30 19:58:38,823 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-30 19:58:39,021 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(3303): Received CLOSE for b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:58:39,021 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(3303): Received CLOSE for 32fb1ef92bd71752385108bb2110eb59 2023-05-30 19:58:39,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b21b6749750cdd9d4cb4e904894873d8, disabling compactions & flushes 2023-05-30 19:58:39,022 DEBUG [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1504): Waiting on 32fb1ef92bd71752385108bb2110eb59, b21b6749750cdd9d4cb4e904894873d8 2023-05-30 19:58:39,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:39,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:39,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. after waiting 0 ms 2023-05-30 19:58:39,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:39,022 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:39,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 
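The abort above comes from the WAL roll path: the LogRoller asked FSHLog to close the current writer, the close failed against HDFS (the pipeline recovery error in the trace), and the region server treated that as fatal rather than keep writing to a WAL it could not close cleanly. The same roll can be requested explicitly through the public HBase 2.x Admin API; the following is only a sketch under the assumption of a reachable cluster, and the connection setup and the loop over all live servers are illustrative.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask every live region server to close its current WAL file and open a
      // new one; this is the client-visible form of the roll the LogRoller
      // thread performs in the log above. A failed close surfaces as
      // FailedLogCloseException, and the server aborts on such a failure.
      for (ServerName server : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(server);
      }
    }
  }
}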
2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b21b6749750cdd9d4cb4e904894873d8: 2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685476665663.b21b6749750cdd9d4cb4e904894873d8. 2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 32fb1ef92bd71752385108bb2110eb59, disabling compactions & flushes 2023-05-30 19:58:39,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. after waiting 0 ms 2023-05-30 19:58:39,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,023 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 32fb1ef92bd71752385108bb2110eb59: 2023-05-30 19:58:39,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685476665145.32fb1ef92bd71752385108bb2110eb59. 2023-05-30 19:58:39,222 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39567,1685476664529; all regions closed. 2023-05-30 19:58:39,222 DEBUG [RS:0;jenkins-hbase4:39567] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:58:39,222 INFO [RS:0;jenkins-hbase4:39567] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:58:39,222 INFO [RS:0;jenkins-hbase4:39567] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-30 19:58:39,222 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
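The ERROR lines reporting "Memstore data size is 4304" and "78" mean those regions were closed during the abort while still holding unflushed edits. The bytes got there through ordinary client writes; a minimal sketch of such a write follows (the table name is taken from the log, while the column family, row and value are invented for illustration).

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteRowSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(
             TableName.valueOf("TestLogRolling-testLogRollOnPipelineRestart"))) {
      // Each put is appended to the WAL and buffered in the region's memstore;
      // until a flush writes the memstore out as an HFile, an abortive close
      // reports the buffered bytes, which is exactly the ERROR seen above.
      Put put = new Put(Bytes.toBytes("row-0001"));
      put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);
    }
  }
}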
2023-05-30 19:58:39,223 INFO [RS:0;jenkins-hbase4:39567] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39567 2023-05-30 19:58:39,226 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39567,1685476664529 2023-05-30 19:58:39,226 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:58:39,227 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:58:39,228 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39567,1685476664529] 2023-05-30 19:58:39,228 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39567,1685476664529; numProcessing=1 2023-05-30 19:58:39,230 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39567,1685476664529 already deleted, retry=false 2023-05-30 19:58:39,230 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39567,1685476664529 expired; onlineServers=0 2023-05-30 19:58:39,230 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,42889,1685476664487' ***** 2023-05-30 19:58:39,230 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-30 19:58:39,230 DEBUG [M:0;jenkins-hbase4:42889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37b795f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:58:39,230 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:58:39,230 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42889,1685476664487; all regions closed. 2023-05-30 19:58:39,230 DEBUG [M:0;jenkins-hbase4:42889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:58:39,230 DEBUG [M:0;jenkins-hbase4:42889] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-30 19:58:39,230 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
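The sequence above (NodeDeleted under /hbase/rs, then RegionServerTracker processing the expiration) is ZooKeeper's ephemeral-node mechanism at work: the region server's znode disappears when its session ends, and the master's watch fires. A stripped-down sketch of that mechanism with the plain ZooKeeper client follows; the quorum address, paths and timeout are placeholders, not values from this test, and a fresh ZooKeeper instance is assumed.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeSketch {
  public static void main(String[] args) throws Exception {
    // Default watcher just prints what it sees (connection state changes and
    // node events for watches registered through this handle).
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000,
        event -> System.out.println("event: " + event));
    zk.create("/demo-rs", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    // Register the way a region server does: an ephemeral child node that
    // ZooKeeper deletes automatically when this session ends.
    zk.create("/demo-rs/server-1", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    // In HBase the master watches the parent from its own session; when this
    // session closes, the ephemeral child is removed and that watch fires with
    // NodeDeleted / NodeChildrenChanged, which RegionServerTracker handles above.
    zk.close();
  }
}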
2023-05-30 19:58:39,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476664699] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476664699,5,FailOnTimeoutGroup] 2023-05-30 19:58:39,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476664699] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476664699,5,FailOnTimeoutGroup] 2023-05-30 19:58:39,230 DEBUG [M:0;jenkins-hbase4:42889] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-30 19:58:39,232 INFO [M:0;jenkins-hbase4:42889] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-30 19:58:39,232 INFO [M:0;jenkins-hbase4:42889] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-30 19:58:39,232 INFO [M:0;jenkins-hbase4:42889] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-30 19:58:39,232 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-30 19:58:39,232 DEBUG [M:0;jenkins-hbase4:42889] master.HMaster(1512): Stopping service threads 2023-05-30 19:58:39,232 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:39,232 INFO [M:0;jenkins-hbase4:42889] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-30 19:58:39,232 ERROR [M:0;jenkins-hbase4:42889] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-30 19:58:39,232 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:58:39,232 INFO [M:0;jenkins-hbase4:42889] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-30 19:58:39,233 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
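The HFileCleaner threads exiting and the chore-service shutdown above (and the region server's list of chores a few lines earlier) refer to HBase's ScheduledChore/ChoreService machinery: periodic tasks such as the log cleaner, HFile cleaner and compaction tuner that the shutdown sequence cancels. A minimal, self-contained sketch of defining and scheduling such a chore is shown below; the names and period are made up for illustration.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Stoppable lets the chore observe an external "please stop" signal.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("demo");
    ScheduledChore chore = new ScheduledChore("DemoCleaner", stopper, 60_000) {
      @Override protected void chore() {
        // periodic work, e.g. deleting obsolete files, runs every 60 seconds
      }
    };
    service.scheduleChore(chore);
    Thread.sleep(1_000);
    // On shutdown the remaining chores are reported, much like the
    // "Chore service for: ... had [ScheduledChore name=...]" lines above.
    service.shutdown();
  }
}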
2023-05-30 19:58:39,233 DEBUG [M:0;jenkins-hbase4:42889] zookeeper.ZKUtil(398): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-30 19:58:39,233 WARN [M:0;jenkins-hbase4:42889] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-30 19:58:39,233 INFO [M:0;jenkins-hbase4:42889] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-30 19:58:39,233 INFO [M:0;jenkins-hbase4:42889] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-30 19:58:39,234 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:58:39,234 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:39,234 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:39,234 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:58:39,234 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:39,234 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.16 KB heapSize=45.78 KB 2023-05-30 19:58:39,247 INFO [M:0;jenkins-hbase4:42889] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.16 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4e03ed908d124b03b64d1300eb36020c 2023-05-30 19:58:39,252 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4e03ed908d124b03b64d1300eb36020c as hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4e03ed908d124b03b64d1300eb36020c 2023-05-30 19:58:39,257 INFO [M:0;jenkins-hbase4:42889] regionserver.HStore(1080): Added hdfs://localhost:34399/user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4e03ed908d124b03b64d1300eb36020c, entries=11, sequenceid=92, filesize=7.0 K 2023-05-30 19:58:39,258 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegion(2948): Finished flush of dataSize ~38.16 KB/39075, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=92, compaction requested=false 2023-05-30 19:58:39,259 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
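The flush above shows the normal write-out path: the memstore is written to a .tmp file, the file is committed into the store directory, and the flush is recorded with its sequence id. From a client or a test, the same kind of flush can be forced on a user table through the Admin API; the snippet below is a sketch only, and the table name is a placeholder rather than anything taken from this log.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Asks every region of the table to write its memstore out as a new
      // HFile, internally the same .tmp-then-commit sequence that the master
      // store region performs in the log above.
      admin.flush(TableName.valueOf("demo-table"));
    }
  }
}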
2023-05-30 19:58:39,259 DEBUG [M:0;jenkins-hbase4:42889] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:58:39,259 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/ce00bfe6-31b7-5514-b74a-ce3004b64cb5/MasterData/WALs/jenkins-hbase4.apache.org,42889,1685476664487 2023-05-30 19:58:39,262 INFO [M:0;jenkins-hbase4:42889] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-30 19:58:39,262 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 19:58:39,263 INFO [M:0;jenkins-hbase4:42889] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42889 2023-05-30 19:58:39,265 DEBUG [M:0;jenkins-hbase4:42889] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42889,1685476664487 already deleted, retry=false 2023-05-30 19:58:39,328 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:58:39,328 INFO [RS:0;jenkins-hbase4:39567] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39567,1685476664529; zookeeper connection closed. 2023-05-30 19:58:39,328 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): regionserver:39567-0x1007dab679b0001, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:58:39,328 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@717b00b4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@717b00b4 2023-05-30 19:58:39,331 INFO [Listener at localhost/43261] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-30 19:58:39,428 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:58:39,428 INFO [M:0;jenkins-hbase4:42889] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42889,1685476664487; zookeeper connection closed. 
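The lines above, ending in "Shutdown of 1 master(s) and 1 regionserver(s) complete" and followed shortly by "Minicluster is down", are what HBaseTestingUtility produces when a test tears its cluster down. A sketch of the corresponding JUnit teardown follows; the TEST_UTIL field name is a common HBase test convention assumed here, not something taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.After;

public class TearDownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @After
  public void tearDown() throws Exception {
    // Stops the region servers and master, then the mini DFS and ZooKeeper
    // clusters, producing the shutdown sequence logged above.
    TEST_UTIL.shutdownMiniCluster();
  }
}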
2023-05-30 19:58:39,428 DEBUG [Listener at localhost/39811-EventThread] zookeeper.ZKWatcher(600): master:42889-0x1007dab679b0000, quorum=127.0.0.1:59903, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:58:39,429 WARN [Listener at localhost/43261] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:58:39,433 INFO [Listener at localhost/43261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:58:39,537 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:58:39,537 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 79ce9add-6f2c-4319-93ef-41b8122cdd9a) service to localhost/127.0.0.1:34399 2023-05-30 19:58:39,538 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data3/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:39,538 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data4/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:39,540 WARN [Listener at localhost/43261] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:58:39,543 INFO [Listener at localhost/43261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:58:39,646 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:58:39,646 WARN [BP-1713610059-172.31.14.131-1685476663929 heartbeating to localhost/127.0.0.1:34399] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1713610059-172.31.14.131-1685476663929 (Datanode Uuid 5cb4afa3-6dc9-43ef-8cff-67641b02802b) service to localhost/127.0.0.1:34399 2023-05-30 19:58:39,647 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data1/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:39,647 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/cluster_9eaaad24-b94d-1c31-2463-a27beb9b9cea/dfs/data/data2/current/BP-1713610059-172.31.14.131-1685476663929] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:58:39,658 INFO [Listener at localhost/43261] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:58:39,769 INFO [Listener at localhost/43261] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-30 19:58:39,781 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-30 19:58:39,791 INFO [Listener at localhost/43261] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=86 (was 74) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:34399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:34399 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:34399 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/43261 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
LeaseRenewer:jenkins@localhost:34399 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (889566633) connection to localhost/127.0.0.1:34399 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) - Thread LEAK? -, OpenFileDescriptor=460 (was 461), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=43 (was 61), ProcessCount=168 (was 168), AvailableMemoryMB=2917 (was 3129) 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=86, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=43, ProcessCount=168, AvailableMemoryMB=2917 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/hadoop.log.dir so I do NOT create it in target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1c33e37d-03a0-3198-e98f-3abd8ab20ccb/hadoop.tmp.dir so I do NOT create it in target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c, deleteOnExit=true 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-30 19:58:39,799 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/test.cache.data in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/hadoop.tmp.dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/hadoop.log.dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-30 19:58:39,800 DEBUG [Listener at localhost/43261] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-30 19:58:39,800 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/yarn.nodemanager.remote-app-log-dir in system 
properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/nfs.dump.dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/java.io.tmpdir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-30 19:58:39,801 INFO [Listener at localhost/43261] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-30 19:58:39,803 WARN [Listener at localhost/43261] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
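At this point the next test case (testCompactionRecordDoesntBlockRolling) starts a fresh minicluster with the option printed above: one master, one region server, two datanodes and one ZooKeeper server. A sketch of the setup that produces this, using the public HBaseTestingUtility API with the values from the log, follows; the field and method names are conventional test scaffolding assumed for illustration.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.Before;

public class SetUpSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @Before
  public void setUp() throws Exception {
    // Matches StartMiniClusterOption{numMasters=1, numRegionServers=1,
    // numDataNodes=2, numZkServers=1} from the log above; starting it brings
    // up mini DFS, a mini ZooKeeper cluster, the master and the region server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }
}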
2023-05-30 19:58:39,807 WARN [Listener at localhost/43261] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:58:39,807 WARN [Listener at localhost/43261] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:58:39,851 WARN [Listener at localhost/43261] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:58:39,853 INFO [Listener at localhost/43261] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:58:39,857 INFO [Listener at localhost/43261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/java.io.tmpdir/Jetty_localhost_40689_hdfs____.2likyj/webapp 2023-05-30 19:58:39,947 INFO [Listener at localhost/43261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40689 2023-05-30 19:58:39,948 WARN [Listener at localhost/43261] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-30 19:58:39,952 WARN [Listener at localhost/43261] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:58:39,952 WARN [Listener at localhost/43261] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:58:39,990 WARN [Listener at localhost/40309] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:58:40,000 WARN [Listener at localhost/40309] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:58:40,002 WARN [Listener at localhost/40309] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:58:40,004 INFO [Listener at localhost/40309] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:58:40,008 INFO [Listener at localhost/40309] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/java.io.tmpdir/Jetty_localhost_34609_datanode____.84gp5n/webapp 2023-05-30 19:58:40,103 INFO [Listener at localhost/40309] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34609 2023-05-30 19:58:40,109 WARN [Listener at localhost/40707] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:58:40,123 WARN [Listener at localhost/40707] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:58:40,125 WARN [Listener at localhost/40707] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:58:40,126 INFO [Listener at localhost/40707] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:58:40,129 INFO [Listener at localhost/40707] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/java.io.tmpdir/Jetty_localhost_44975_datanode____.l3ezb6/webapp 2023-05-30 19:58:40,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8d949bd184f2969b: Processing first storage report for DS-334e1331-21d2-42d4-ac85-c46f01d3ff86 from datanode 53ee8fee-ae27-49ef-95a5-37bd8cc8a039 2023-05-30 19:58:40,203 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8d949bd184f2969b: from storage DS-334e1331-21d2-42d4-ac85-c46f01d3ff86 node DatanodeRegistration(127.0.0.1:38945, datanodeUuid=53ee8fee-ae27-49ef-95a5-37bd8cc8a039, infoPort=44531, infoSecurePort=0, ipcPort=40707, storageInfo=lv=-57;cid=testClusterID;nsid=991065090;c=1685476719810), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:40,203 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8d949bd184f2969b: Processing first storage report for DS-0ab4c3d7-355b-4c7e-a32c-d8643db374ad from datanode 53ee8fee-ae27-49ef-95a5-37bd8cc8a039 2023-05-30 19:58:40,203 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8d949bd184f2969b: from storage DS-0ab4c3d7-355b-4c7e-a32c-d8643db374ad node DatanodeRegistration(127.0.0.1:38945, datanodeUuid=53ee8fee-ae27-49ef-95a5-37bd8cc8a039, infoPort=44531, infoSecurePort=0, ipcPort=40707, storageInfo=lv=-57;cid=testClusterID;nsid=991065090;c=1685476719810), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:40,231 INFO [Listener at localhost/40707] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44975 2023-05-30 19:58:40,238 WARN [Listener at localhost/44453] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:58:40,327 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1129590a19e9162a: Processing first storage report for DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc from datanode 71a3fb9c-d51e-4792-bbd7-f0b6665ed611 2023-05-30 19:58:40,327 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1129590a19e9162a: from storage DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc node DatanodeRegistration(127.0.0.1:46327, datanodeUuid=71a3fb9c-d51e-4792-bbd7-f0b6665ed611, infoPort=40271, infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=991065090;c=1685476719810), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:40,327 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1129590a19e9162a: Processing first storage report for DS-974245cb-15f4-410d-9694-a1943e4d46fe from datanode 71a3fb9c-d51e-4792-bbd7-f0b6665ed611 2023-05-30 19:58:40,327 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1129590a19e9162a: from storage DS-974245cb-15f4-410d-9694-a1943e4d46fe node DatanodeRegistration(127.0.0.1:46327, datanodeUuid=71a3fb9c-d51e-4792-bbd7-f0b6665ed611, infoPort=40271, infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=991065090;c=1685476719810), blocks: 
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:58:40,345 DEBUG [Listener at localhost/44453] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2 2023-05-30 19:58:40,348 INFO [Listener at localhost/44453] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/zookeeper_0, clientPort=61525, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-30 19:58:40,348 INFO [Listener at localhost/44453] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61525 2023-05-30 19:58:40,349 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,349 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,362 INFO [Listener at localhost/44453] util.FSUtils(471): Created version file at hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197 with version=8 2023-05-30 19:58:40,362 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,364 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:58:40,364 
INFO [Listener at localhost/44453] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:58:40,366 INFO [Listener at localhost/44453] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44079 2023-05-30 19:58:40,366 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,367 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,367 INFO [Listener at localhost/44453] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44079 connecting to ZooKeeper ensemble=127.0.0.1:61525 2023-05-30 19:58:40,374 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:440790x0, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:58:40,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44079-0x1007dac41de0000 connected 2023-05-30 19:58:40,388 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:58:40,388 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:58:40,389 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:58:40,389 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44079 2023-05-30 19:58:40,389 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44079 2023-05-30 19:58:40,390 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44079 2023-05-30 19:58:40,391 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44079 2023-05-30 19:58:40,391 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44079 2023-05-30 19:58:40,391 INFO [Listener at localhost/44453] master.HMaster(444): hbase.rootdir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197, hbase.cluster.distributed=false 2023-05-30 19:58:40,403 INFO [Listener at localhost/44453] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:58:40,404 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,404 INFO [Listener at 
localhost/44453] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,404 INFO [Listener at localhost/44453] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:58:40,404 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:58:40,404 INFO [Listener at localhost/44453] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:58:40,404 INFO [Listener at localhost/44453] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:58:40,405 INFO [Listener at localhost/44453] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33129 2023-05-30 19:58:40,405 INFO [Listener at localhost/44453] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 19:58:40,406 DEBUG [Listener at localhost/44453] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 19:58:40,407 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,407 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,408 INFO [Listener at localhost/44453] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33129 connecting to ZooKeeper ensemble=127.0.0.1:61525 2023-05-30 19:58:40,412 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:331290x0, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:58:40,413 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33129-0x1007dac41de0001 connected 2023-05-30 19:58:40,413 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:58:40,414 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:58:40,415 DEBUG [Listener at localhost/44453] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:58:40,416 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33129 2023-05-30 19:58:40,416 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33129 2023-05-30 19:58:40,417 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33129 2023-05-30 19:58:40,417 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33129 2023-05-30 19:58:40,417 DEBUG [Listener at localhost/44453] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33129 2023-05-30 19:58:40,418 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,420 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:58:40,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,421 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:58:40,421 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:58:40,421 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:58:40,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44079,1685476720363 from backup master directory 2023-05-30 19:58:40,423 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:58:40,424 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,424 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
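[Annotation] The entries around this point trace the usual ZooKeeper-based active-master election: the master advertises itself under /hbase/backup-masters, a /hbase/master znode appears (NodeCreated), and the backup-masters entry is deleted before the process registers as active master. A minimal sketch of that pattern, written against the plain Apache ZooKeeper client rather than HBase's internal ActiveMasterManager; the ensemble address and server name are copied from the log, and the parent znodes (/hbase, /hbase/backup-masters) are assumed to already exist.

    import org.apache.zookeeper.*;

    public class MasterElectionSketch {
      public static void main(String[] args) throws Exception {
        String serverName = "jenkins-hbase4.apache.org,44079,1685476720363"; // from the log
        // Real code waits for the SyncConnected event before issuing requests.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:61525", 90000, event -> { });

        // 1. Advertise ourselves as a backup master (ephemeral, vanishes on crash).
        zk.create("/hbase/backup-masters/" + serverName, serverName.getBytes(),
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        try {
          // 2. Race to create /hbase/master; the winner becomes the active master.
          zk.create("/hbase/master", serverName.getBytes(),
              ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
          // 3. Won the race: drop our backup-masters entry, as the log shows.
          zk.delete("/hbase/backup-masters/" + serverName, -1);
          System.out.println("Registered as active master=" + serverName);
        } catch (KeeperException.NodeExistsException e) {
          // Lost the race: stay a backup master and watch /hbase/master for deletion.
          zk.exists("/hbase/master", true);
        }
      }
    }
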
2023-05-30 19:58:40,424 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:58:40,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/hbase.id with ID: 75e6b816-e847-4af0-b0af-f7575a3e6c48 2023-05-30 19:58:40,446 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:40,449 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x731b38a3 to 127.0.0.1:61525 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:58:40,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@aaa39b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:58:40,464 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:58:40,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-30 19:58:40,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:58:40,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store-tmp 2023-05-30 19:58:40,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-30 19:58:40,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:58:40,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:40,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:40,475 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:58:40,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:40,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:58:40,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:58:40,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/WALs/jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44079%2C1685476720363, suffix=, logDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/WALs/jenkins-hbase4.apache.org,44079,1685476720363, archiveDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/oldWALs, maxLogs=10 2023-05-30 19:58:40,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/WALs/jenkins-hbase4.apache.org,44079,1685476720363/jenkins-hbase4.apache.org%2C44079%2C1685476720363.1685476720479 2023-05-30 19:58:40,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK], DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK]] 2023-05-30 19:58:40,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:58:40,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:40,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,487 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,488 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-30 19:58:40,488 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-30 19:58:40,489 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:58:40,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:58:40,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828697, jitterRate=0.05374322831630707}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:58:40,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:58:40,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-30 19:58:40,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): 
Starting the Region Procedure Store, number threads=5 2023-05-30 19:58:40,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-30 19:58:40,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-30 19:58:40,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-30 19:58:40,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-30 19:58:40,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-30 19:58:40,503 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-30 19:58:40,504 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-30 19:58:40,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-30 19:58:40,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
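[Annotation] The 'proc' family descriptor printed when the master local region ('master:store') was created a few entries earlier corresponds to settings that can be expressed through the public HBase 2.x descriptor builders. This is an illustration only: the master:store region is internal and is not created through this API, and the table name below is a placeholder.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(false)                  // IN_MEMORY => 'false'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
            .setScope(0)                         // REPLICATION_SCOPE => '0'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example", "store")) // placeholder name
            .setColumnFamily(proc)
            .build();
      }
    }
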
2023-05-30 19:58:40,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-30 19:58:40,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-30 19:58:40,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-30 19:58:40,517 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,517 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-30 19:58:40,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-30 19:58:40,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-30 19:58:40,519 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:58:40,520 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:58:40,520 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44079,1685476720363, sessionid=0x1007dac41de0000, setting cluster-up flag (Was=false) 2023-05-30 19:58:40,525 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,529 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-30 19:58:40,530 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,532 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 
19:58:40,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-30 19:58:40,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:40,538 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.hbase-snapshot/.tmp 2023-05-30 19:58:40,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-30 19:58:40,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:58:40,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:58:40,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476750542 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-30 19:58:40,542 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-30 19:58:40,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-30 19:58:40,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-30 19:58:40,545 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:58:40,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-30 19:58:40,545 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-30 19:58:40,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-30 19:58:40,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-30 19:58:40,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-30 19:58:40,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476720545,5,FailOnTimeoutGroup] 2023-05-30 19:58:40,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476720545,5,FailOnTimeoutGroup] 2023-05-30 19:58:40,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-30 19:58:40,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
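[Annotation] The ChoreService entries above (LogsCleaner period=600000, HFileCleaner period=600000, SnapshotCleaner period=1800000, and so on) are simply tasks run at fixed periods. HBase implements them with its own ChoreService / ScheduledChore classes; the sketch below is only a plain-JDK analogue that mirrors two of the logged periods.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreSketch {
      public static void main(String[] args) {
        ScheduledExecutorService chores = Executors.newScheduledThreadPool(2);
        // Analogue of LogsCleaner, period=600000 ms
        chores.scheduleAtFixedRate(() -> System.out.println("clean old WALs"),
            0, 600_000, TimeUnit.MILLISECONDS);
        // Analogue of SnapshotCleaner, period=1800000 ms
        chores.scheduleAtFixedRate(() -> System.out.println("clean expired snapshots"),
            0, 1_800_000, TimeUnit.MILLISECONDS);
      }
    }
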
2023-05-30 19:58:40,546 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:58:40,557 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:58:40,558 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:58:40,558 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197 2023-05-30 19:58:40,565 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:40,567 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:58:40,568 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/info 2023-05-30 19:58:40,569 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:58:40,569 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,569 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:58:40,571 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:58:40,571 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:58:40,572 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,572 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:58:40,573 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/table 2023-05-30 19:58:40,574 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:58:40,574 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,575 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740 2023-05-30 19:58:40,575 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740 2023-05-30 19:58:40,577 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:58:40,578 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:58:40,580 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:58:40,580 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=746256, jitterRate=-0.05108697712421417}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:58:40,581 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:58:40,581 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:58:40,581 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:58:40,582 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:58:40,582 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 19:58:40,582 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 19:58:40,584 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 19:58:40,585 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 19:58:40,619 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(951): ClusterId : 75e6b816-e847-4af0-b0af-f7575a3e6c48 2023-05-30 19:58:40,620 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 19:58:40,622 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 19:58:40,622 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 19:58:40,624 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 19:58:40,625 DEBUG [RS:0;jenkins-hbase4:33129] zookeeper.ReadOnlyZKClient(139): Connect 0x3fe1478a to 127.0.0.1:61525 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:58:40,630 DEBUG [RS:0;jenkins-hbase4:33129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3126be06, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:58:40,630 DEBUG [RS:0;jenkins-hbase4:33129] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35192cfa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:58:40,639 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33129 2023-05-30 19:58:40,639 INFO [RS:0;jenkins-hbase4:33129] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 19:58:40,639 INFO [RS:0;jenkins-hbase4:33129] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 19:58:40,639 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1022): About to register with Master. 
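[Annotation] The hbase:meta region (encoded 1588230740) has just been created and queued for assignment (pid=2, TransitRegionStateProcedure ASSIGN). Once the cluster is up, its rows can be read with the ordinary client API. A minimal sketch; the Configuration is assumed to point at the mini cluster started by this test (for example via HBaseTestingUtility#getConfiguration()).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;

    public class MetaScanSketch {
      public static void dumpMeta(Configuration conf) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
          for (Result r : scanner) {
            // Each row describes one region of a user or system table
            // (meta's own location is kept in ZooKeeper, not in meta itself).
            System.out.println(r);
          }
        }
      }
    }
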
2023-05-30 19:58:40,639 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,44079,1685476720363 with isa=jenkins-hbase4.apache.org/172.31.14.131:33129, startcode=1685476720403 2023-05-30 19:58:40,640 DEBUG [RS:0;jenkins-hbase4:33129] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 19:58:40,642 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56501, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:58:40,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,644 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197 2023-05-30 19:58:40,644 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40309 2023-05-30 19:58:40,644 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:58:40,645 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:58:40,646 DEBUG [RS:0;jenkins-hbase4:33129] zookeeper.ZKUtil(162): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,646 WARN [RS:0;jenkins-hbase4:33129] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
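[Annotation] The reportForDuty / "Registering regionserver" exchange above is what makes jenkins-hbase4.apache.org,33129,1685476720403 show up as a live server. A hedged sketch against the public 2.x Admin API of how a test or client could observe that state; the Configuration is again assumed to point at this mini cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClusterStateSketch {
      public static void printServers(Configuration conf) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          ClusterMetrics metrics = admin.getClusterMetrics();
          System.out.println("active master: " + metrics.getMasterName());
          for (ServerName rs : metrics.getLiveServerMetrics().keySet()) {
            System.out.println("live region server: " + rs); // e.g. ...,33129,1685476720403
          }
        }
      }
    }
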
2023-05-30 19:58:40,646 INFO [RS:0;jenkins-hbase4:33129] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:58:40,646 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,646 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33129,1685476720403] 2023-05-30 19:58:40,650 DEBUG [RS:0;jenkins-hbase4:33129] zookeeper.ZKUtil(162): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,651 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:58:40,651 INFO [RS:0;jenkins-hbase4:33129] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:58:40,652 INFO [RS:0;jenkins-hbase4:33129] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:58:40,652 INFO [RS:0;jenkins-hbase4:33129] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:58:40,652 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,653 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:58:40,654 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
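[Annotation] The WAL entries in this run (FSHLogProvider instantiated here; "blocksize=256 MB, rollsize=128 MB" for the master WAL earlier and maxLogs=32 for the region-server WAL a few entries later) reflect the usual roll-size relationship: rollsize = blocksize x the roll multiplier (0.5 by default). A sketch of the commonly used configuration keys, which is what TestLogRolling exercises; key names are the usual ones but defaults can differ between versions, so treat the values as illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollConfigSketch {
      public static Configuration walConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");                 // FSHLogProvider, as logged
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // rollsize = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                // as logged for the RS WAL
        return conf;
      }
    }
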
2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,654 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:58:40,655 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,655 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,655 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,655 DEBUG [RS:0;jenkins-hbase4:33129] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:58:40,655 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,655 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,656 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,732 INFO [RS:0;jenkins-hbase4:33129] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:58:40,732 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33129,1685476720403-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
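[Annotation] The memstore figures logged just above (globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M) follow from two heap fractions: the global limit is a fraction of the JVM heap, and the low-water mark is a fraction of that limit. The sketch below reproduces the logged numbers; the heap size is inferred from the log rather than stated in it, and the key names in the comments are the usual ones.

    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        double heapMb = 1956;                 // inferred: 782.4 MB = heap * 0.4
        double globalMemstoreFraction = 0.4;  // hbase.regionserver.global.memstore.size
        double lowerLimitFraction = 0.95;     // hbase.regionserver.global.memstore.size.lower.limit

        double limitMb = heapMb * globalMemstoreFraction;  // ~782.4 M in the log
        double lowMarkMb = limitMb * lowerLimitFraction;   // ~743.3 M in the log
        System.out.printf("limit=%.1f M lowMark=%.1f M%n", limitMb, lowMarkMb);
      }
    }
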
2023-05-30 19:58:40,735 DEBUG [jenkins-hbase4:44079] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 19:58:40,736 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33129,1685476720403, state=OPENING 2023-05-30 19:58:40,737 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 19:58:40,738 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:40,739 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:58:40,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33129,1685476720403}] 2023-05-30 19:58:40,746 INFO [RS:0;jenkins-hbase4:33129] regionserver.Replication(203): jenkins-hbase4.apache.org,33129,1685476720403 started 2023-05-30 19:58:40,746 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33129,1685476720403, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33129, sessionid=0x1007dac41de0001 2023-05-30 19:58:40,746 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:58:40,746 DEBUG [RS:0;jenkins-hbase4:33129] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,746 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33129,1685476720403' 2023-05-30 19:58:40,746 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:58:40,746 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33129,1685476720403' 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:58:40,747 DEBUG [RS:0;jenkins-hbase4:33129] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:58:40,748 DEBUG [RS:0;jenkins-hbase4:33129] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:58:40,748 INFO [RS:0;jenkins-hbase4:33129] quotas.RegionServerRpcQuotaManager(63): Quota support 
disabled 2023-05-30 19:58:40,748 INFO [RS:0;jenkins-hbase4:33129] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-30 19:58:40,796 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:58:40,850 INFO [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33129%2C1685476720403, suffix=, logDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403, archiveDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs, maxLogs=32 2023-05-30 19:58:40,858 INFO [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476720850 2023-05-30 19:58:40,858 DEBUG [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK], DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK]] 2023-05-30 19:58:40,893 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:40,894 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 19:58:40,895 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46168, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 19:58:40,899 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 19:58:40,899 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:58:40,900 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33129%2C1685476720403.meta, suffix=.meta, logDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403, archiveDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs, maxLogs=32 2023-05-30 19:58:40,907 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.meta.1685476720901.meta 2023-05-30 19:58:40,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK], DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK]] 2023-05-30 19:58:40,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): 
Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 19:58:40,908 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 19:58:40,908 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 19:58:40,909 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:58:40,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/info 2023-05-30 19:58:40,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/info 2023-05-30 19:58:40,910 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:58:40,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:58:40,912 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:58:40,912 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:58:40,912 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:58:40,912 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,913 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:58:40,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/table 2023-05-30 19:58:40,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/table 2023-05-30 19:58:40,914 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:58:40,914 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:40,915 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740 2023-05-30 19:58:40,916 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740 2023-05-30 19:58:40,918 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:58:40,920 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:58:40,920 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=881938, jitterRate=0.1214422732591629}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:58:40,921 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:58:40,922 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476720893 2023-05-30 19:58:40,926 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 19:58:40,927 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 19:58:40,927 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33129,1685476720403, state=OPEN 2023-05-30 19:58:40,930 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 19:58:40,930 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:58:40,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 19:58:40,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33129,1685476720403 in 191 msec 2023-05-30 19:58:40,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 19:58:40,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 350 msec 2023-05-30 19:58:40,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 396 msec 2023-05-30 19:58:40,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476720936, completionTime=-1 2023-05-30 19:58:40,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 19:58:40,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-30 19:58:40,939 DEBUG [hconnection-0x3f7389d3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:58:40,943 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:58:40,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 19:58:40,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476780944 2023-05-30 19:58:40,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685476840944 2023-05-30 19:58:40,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44079,1685476720363-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44079,1685476720363-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44079,1685476720363-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44079, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 19:58:40,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:58:40,951 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 19:58:40,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 19:58:40,953 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:58:40,953 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:58:40,956 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:40,956 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b empty. 2023-05-30 19:58:40,957 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:40,957 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 19:58:40,971 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 19:58:40,972 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a1ac9b963e8c787c5be46bf984611c9b, NAME => 'hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp 2023-05-30 19:58:40,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:40,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a1ac9b963e8c787c5be46bf984611c9b, disabling compactions & flushes 2023-05-30 19:58:40,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 
2023-05-30 19:58:40,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:40,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. after waiting 0 ms 2023-05-30 19:58:40,981 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:40,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:40,982 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a1ac9b963e8c787c5be46bf984611c9b: 2023-05-30 19:58:40,984 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:58:40,985 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476720985"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476720985"}]},"ts":"1685476720985"} 2023-05-30 19:58:40,987 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:58:40,988 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:58:40,988 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476720988"}]},"ts":"1685476720988"} 2023-05-30 19:58:40,990 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 19:58:40,997 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a1ac9b963e8c787c5be46bf984611c9b, ASSIGN}] 2023-05-30 19:58:40,999 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a1ac9b963e8c787c5be46bf984611c9b, ASSIGN 2023-05-30 19:58:41,000 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a1ac9b963e8c787c5be46bf984611c9b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33129,1685476720403; forceNewPlan=false, retain=false 2023-05-30 19:58:41,151 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a1ac9b963e8c787c5be46bf984611c9b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:41,151 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476721151"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476721151"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476721151"}]},"ts":"1685476721151"} 2023-05-30 19:58:41,154 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure a1ac9b963e8c787c5be46bf984611c9b, server=jenkins-hbase4.apache.org,33129,1685476720403}] 2023-05-30 19:58:41,310 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:41,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a1ac9b963e8c787c5be46bf984611c9b, NAME => 'hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:58:41,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:41,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,311 INFO [StoreOpener-a1ac9b963e8c787c5be46bf984611c9b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,313 DEBUG [StoreOpener-a1ac9b963e8c787c5be46bf984611c9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/info 2023-05-30 19:58:41,313 DEBUG [StoreOpener-a1ac9b963e8c787c5be46bf984611c9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/info 2023-05-30 19:58:41,313 INFO [StoreOpener-a1ac9b963e8c787c5be46bf984611c9b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a1ac9b963e8c787c5be46bf984611c9b columnFamilyName info 2023-05-30 19:58:41,314 INFO [StoreOpener-a1ac9b963e8c787c5be46bf984611c9b-1] regionserver.HStore(310): Store=a1ac9b963e8c787c5be46bf984611c9b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:41,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,317 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:58:41,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:58:41,319 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a1ac9b963e8c787c5be46bf984611c9b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=743408, jitterRate=-0.05470810830593109}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:58:41,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a1ac9b963e8c787c5be46bf984611c9b: 2023-05-30 19:58:41,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b., pid=6, masterSystemTime=1685476721306 2023-05-30 19:58:41,323 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:41,323 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 
2023-05-30 19:58:41,324 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a1ac9b963e8c787c5be46bf984611c9b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:41,324 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476721324"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476721324"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476721324"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476721324"}]},"ts":"1685476721324"} 2023-05-30 19:58:41,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 19:58:41,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure a1ac9b963e8c787c5be46bf984611c9b, server=jenkins-hbase4.apache.org,33129,1685476720403 in 172 msec 2023-05-30 19:58:41,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 19:58:41,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a1ac9b963e8c787c5be46bf984611c9b, ASSIGN in 332 msec 2023-05-30 19:58:41,332 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:58:41,332 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476721332"}]},"ts":"1685476721332"} 2023-05-30 19:58:41,334 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 19:58:41,336 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:58:41,338 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 386 msec 2023-05-30 19:58:41,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 19:58:41,353 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:58:41,353 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:41,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 19:58:41,367 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): 
master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:58:41,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-30 19:58:41,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 19:58:41,388 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:58:41,392 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-30 19:58:41,405 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 19:58:41,407 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 19:58:41,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.983sec 2023-05-30 19:58:41,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 19:58:41,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 19:58:41,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 19:58:41,407 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44079,1685476720363-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 19:58:41,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44079,1685476720363-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 19:58:41,409 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 19:58:41,423 DEBUG [Listener at localhost/44453] zookeeper.ReadOnlyZKClient(139): Connect 0x6b9b113b to 127.0.0.1:61525 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:58:41,427 DEBUG [Listener at localhost/44453] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b0ea005, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:58:41,429 DEBUG [hconnection-0x37a763ce-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:58:41,431 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46192, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:58:41,433 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:58:41,433 INFO [Listener at localhost/44453] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:58:41,437 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 19:58:41,438 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:58:41,438 INFO [Listener at localhost/44453] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 19:58:41,440 DEBUG [Listener at localhost/44453] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-30 19:58:41,443 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43460, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-30 19:58:41,444 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-30 19:58:41,444 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-30 19:58:41,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:58:41,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:58:41,447 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:58:41,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-30 19:58:41,448 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:58:41,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:58:41,450 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,451 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09 empty. 
2023-05-30 19:58:41,451 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,451 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-30 19:58:41,463 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-30 19:58:41,464 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => a2b7429fbc533acc65ccb6c943c75f09, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/.tmp 2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing a2b7429fbc533acc65ccb6c943c75f09, disabling compactions & flushes 2023-05-30 19:58:41,474 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. after waiting 0 ms 2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:41,474 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 
2023-05-30 19:58:41,474 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:58:41,477 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:58:41,477 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685476721477"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476721477"}]},"ts":"1685476721477"} 2023-05-30 19:58:41,479 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:58:41,480 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:58:41,480 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476721480"}]},"ts":"1685476721480"} 2023-05-30 19:58:41,482 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-30 19:58:41,487 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=a2b7429fbc533acc65ccb6c943c75f09, ASSIGN}] 2023-05-30 19:58:41,489 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=a2b7429fbc533acc65ccb6c943c75f09, ASSIGN 2023-05-30 19:58:41,490 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=a2b7429fbc533acc65ccb6c943c75f09, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33129,1685476720403; forceNewPlan=false, retain=false 2023-05-30 19:58:41,641 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=a2b7429fbc533acc65ccb6c943c75f09, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:41,641 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685476721641"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476721641"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476721641"}]},"ts":"1685476721641"} 2023-05-30 19:58:41,643 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure a2b7429fbc533acc65ccb6c943c75f09, server=jenkins-hbase4.apache.org,33129,1685476720403}] 2023-05-30 19:58:41,800 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:41,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a2b7429fbc533acc65ccb6c943c75f09, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:58:41,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:58:41,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,802 INFO [StoreOpener-a2b7429fbc533acc65ccb6c943c75f09-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,803 DEBUG [StoreOpener-a2b7429fbc533acc65ccb6c943c75f09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info 2023-05-30 19:58:41,804 DEBUG [StoreOpener-a2b7429fbc533acc65ccb6c943c75f09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info 2023-05-30 19:58:41,804 INFO [StoreOpener-a2b7429fbc533acc65ccb6c943c75f09-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a2b7429fbc533acc65ccb6c943c75f09 columnFamilyName info 2023-05-30 19:58:41,804 INFO [StoreOpener-a2b7429fbc533acc65ccb6c943c75f09-1] regionserver.HStore(310): Store=a2b7429fbc533acc65ccb6c943c75f09/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:58:41,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,808 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:58:41,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:58:41,810 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a2b7429fbc533acc65ccb6c943c75f09; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=699781, jitterRate=-0.11018335819244385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:58:41,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:58:41,811 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09., pid=11, masterSystemTime=1685476721796 2023-05-30 19:58:41,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:41,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 
2023-05-30 19:58:41,814 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=a2b7429fbc533acc65ccb6c943c75f09, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:41,814 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685476721814"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476721814"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476721814"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476721814"}]},"ts":"1685476721814"} 2023-05-30 19:58:41,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-30 19:58:41,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure a2b7429fbc533acc65ccb6c943c75f09, server=jenkins-hbase4.apache.org,33129,1685476720403 in 173 msec 2023-05-30 19:58:41,820 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-30 19:58:41,820 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=a2b7429fbc533acc65ccb6c943c75f09, ASSIGN in 331 msec 2023-05-30 19:58:41,821 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:58:41,821 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476721821"}]},"ts":"1685476721821"} 2023-05-30 19:58:41,823 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-30 19:58:41,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:58:41,827 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 382 msec 2023-05-30 19:58:44,373 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 19:58:46,651 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-30 19:58:46,652 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-30 19:58:46,652 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:58:51,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1227): Checking to see if 
procedure is done pid=9 2023-05-30 19:58:51,450 INFO [Listener at localhost/44453] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-30 19:58:51,452 DEBUG [Listener at localhost/44453] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:58:51,452 DEBUG [Listener at localhost/44453] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:58:51,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-30 19:58:51,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-30 19:58:51,472 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-30 19:58:51,472 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:58:51,473 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-30 19:58:51,473 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-30 19:58:51,474 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,474 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-30 19:58:51,475 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:58:51,475 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,475 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:58:51,475 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:58:51,475 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,475 DEBUG 
[(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-30 19:58:51,476 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-30 19:58:51,476 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,476 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-30 19:58:51,476 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-30 19:58:51,477 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-30 19:58:51,479 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-30 19:58:51,479 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-30 19:58:51,479 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:58:51,480 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-30 19:58:51,480 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-30 19:58:51,480 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-30 19:58:51,480 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:51,480 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. started... 
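The entries above trace the 'acquire' phase of the ZooKeeper-backed flush-table-proc barrier: the coordinator creates an acquire znode under /hbase/flush-table-proc/acquired, the region server member sets a watch on the matching abort znode, and the work is handed to a FlushTableSubprocedure. The "Current zk system" dumps that ZKProcedureUtil prints further down are simply a recursive listing of that tree. Below is a minimal sketch of producing a similar listing with the plain ZooKeeper client; the quorum address and session timeout are illustrative assumptions, not values taken from this run.

```java
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Recursively prints a znode subtree, similar in spirit to the
// "Current zk system" dumps that ZKProcedureUtil logs above.
public final class FlushProcZnodeDump {

  // Hypothetical connection settings (the run above used quorum 127.0.0.1:61525).
  private static final String QUORUM = "127.0.0.1:2181";
  private static final int SESSION_TIMEOUT_MS = 30_000;

  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper(QUORUM, SESSION_TIMEOUT_MS, event -> { });
    try {
      dump(zk, "/hbase/flush-table-proc", 0);
    } finally {
      zk.close();
    }
  }

  private static void dump(ZooKeeper zk, String path, int depth)
      throws KeeperException, InterruptedException {
    StringBuilder indent = new StringBuilder("|");
    for (int i = 0; i < depth; i++) {
      indent.append("---");
    }
    System.out.println(indent + path.substring(path.lastIndexOf('/') + 1));
    List<String> children = zk.getChildren(path, false); // no watch
    for (String child : children) {
      dump(zk, path + "/" + child, depth + 1);
    }
  }
}
```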
2023-05-30 19:58:51,481 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing a1ac9b963e8c787c5be46bf984611c9b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-30 19:58:51,491 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/.tmp/info/bc54014d06644a8699f77a8bb919e9d1 2023-05-30 19:58:51,496 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/.tmp/info/bc54014d06644a8699f77a8bb919e9d1 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/info/bc54014d06644a8699f77a8bb919e9d1 2023-05-30 19:58:51,504 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/info/bc54014d06644a8699f77a8bb919e9d1, entries=2, sequenceid=6, filesize=4.8 K 2023-05-30 19:58:51,505 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for a1ac9b963e8c787c5be46bf984611c9b in 24ms, sequenceid=6, compaction requested=false 2023-05-30 19:58:51,505 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for a1ac9b963e8c787c5be46bf984611c9b: 2023-05-30 19:58:51,505 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:58:51,505 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-30 19:58:51,505 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
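This flush of hbase:namespace follows the standard memstore-flush path: the snapshot is written to a temporary HFile under the region's .tmp directory, committed (renamed) into the info column-family directory, and registered with the HStore (entries=2, sequenceid=6, about 4.8 K). A hedged sketch of inspecting the committed store files with the Hadoop FileSystem API follows; the namenode URI and region directory mirror the paths printed above but are meant only as an illustration.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Lists the HFiles that a flush has committed into a column-family directory.
// The path below follows the layout visible in the log
// (<rootdir>/data/<namespace>/<table>/<region>/<family>/) and is illustrative.
public final class ListStoreFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:40309"), conf);

    Path familyDir = new Path(
        "/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197"
        + "/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/info");

    for (FileStatus stat : fs.listStatus(familyDir)) {
      System.out.printf("%s\t%d bytes%n", stat.getPath().getName(), stat.getLen());
    }
    fs.close();
  }
}
```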
2023-05-30 19:58:51,505 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,505 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-30 19:58:51,505 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-30 19:58:51,507 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,507 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:58:51,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:58:51,508 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,508 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-30 19:58:51,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:58:51,508 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:58:51,509 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,509 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,509 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:58:51,510 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-30 19:58:51,510 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-30 19:58:51,510 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@499a9dd7[Count = 0] remaining members to acquire global barrier 2023-05-30 19:58:51,510 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,512 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,512 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,512 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,512 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-30 19:58:51,512 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,512 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-30 19:58:51,512 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-30 19:58:51,512 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,33129,1685476720403' in zk 2023-05-30 19:58:51,515 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-30 19:58:51,515 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,515 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-30 19:58:51,515 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,515 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:58:51,515 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:58:51,515 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-30 19:58:51,515 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:58:51,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:58:51,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:58:51,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,517 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,517 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,33129,1685476720403': 2023-05-30 19:58:51,517 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-30 19:58:51,517 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-30 19:58:51,517 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' released barrier for procedure'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-05-30 19:58:51,517 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-30 19:58:51,517 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-30 19:58:51,518 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-30 19:58:51,519 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,519 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,520 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:58:51,520 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,520 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,520 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:58:51,520 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:58:51,521 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,521 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,521 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:58:51,521 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,521 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,521 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,522 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:58:51,522 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-30 19:58:51,522 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,528 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,528 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:58:51,528 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-30 19:58:51,528 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:58:51,528 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:58:51,528 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:58:51,528 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-30 19:58:51,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-30 19:58:51,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:58:51,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-30 19:58:51,529 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:58:51,529 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,529 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-30 19:58:51,529 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-30 19:58:51,529 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:58:51,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:58:51,531 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-30 19:58:51,531 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-30 19:59:01,531 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-30 19:59:01,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-30 19:59:01,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-30 19:59:01,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,547 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:01,547 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:01,548 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-30 19:59:01,548 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
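On the client side, the pattern in these entries is: submit an exec-procedure request for the flush-table-proc signature, then wait while the master coordinates the distributed flush (the log shows a 300 s budget with 10 s sleeps between status checks, and that waiting appears to happen inside HBaseAdmin itself rather than in the test body). A minimal sketch of the same request using the public Admin API; the connection setup is an assumption, and the table name is the one being flushed next in the log.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Submits a distributed table flush through the generic procedure RPC,
// mirroring the "procedure request for: flush-table-proc" exchange above.
public final class FlushTableViaProcedure {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumed client config
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {

      String signature = "flush-table-proc"; // procedure type seen in the log
      String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling"; // table to flush

      // Blocks while the master coordinates the flush; the "Waiting a max of
      // 300000 ms" polling in the log appears to come from inside this call.
      admin.execProcedure(signature, instance, Collections.emptyMap());

      // The "Checking to see if procedure ... is done" lines correspond to this
      // status RPC; after execProcedure returns it should report true.
      boolean done = admin.isProcedureFinished(signature, instance, Collections.emptyMap());
      System.out.println(signature + " finished for " + instance + ": " + done);
    }
  }
}
```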
2023-05-30 19:59:01,548 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,549 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,550 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,550 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:01,550 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:01,550 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:01,550 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,550 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,551 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,552 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-30 19:59:01,552 DEBUG [member: 
'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:01,552 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-30 19:59:01,552 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-30 19:59:01,552 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-30 19:59:01,552 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:01,552 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. started... 2023-05-30 19:59:01,553 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing a2b7429fbc533acc65ccb6c943c75f09 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 19:59:01,566 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/fa89f6f994884f2b910f6c0e3a59f6b4 2023-05-30 19:59:01,572 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/fa89f6f994884f2b910f6c0e3a59f6b4 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4 2023-05-30 19:59:01,581 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4, entries=1, sequenceid=5, filesize=5.8 K 2023-05-30 19:59:01,582 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a2b7429fbc533acc65ccb6c943c75f09 in 29ms, sequenceid=5, compaction requested=false 2023-05-30 19:59:01,582 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:01,582 DEBUG 
[rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:01,582 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-30 19:59:01,582 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-30 19:59:01,582 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,583 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-30 19:59:01,583 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-30 19:59:01,585 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,585 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:01,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:01,586 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,586 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-30 19:59:01,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:01,586 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:01,587 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,587 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,587 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:01,587 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-30 19:59:01,587 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@4da5003c[Count = 0] remaining members to acquire global barrier 2023-05-30 19:59:01,587 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-30 19:59:01,587 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,589 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,589 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,589 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,589 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-30 19:59:01,589 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-30 19:59:01,589 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33129,1685476720403' in zk 2023-05-30 19:59:01,589 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,589 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-30 19:59:01,591 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-30 19:59:01,591 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,591 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:59:01,591 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,591 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:01,592 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:01,591 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-30 19:59:01,592 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:01,592 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:01,592 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,593 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,593 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:01,593 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,593 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,594 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33129,1685476720403': 2023-05-30 19:59:01,594 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-30 19:59:01,594 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-30 19:59:01,594 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-30 19:59:01,594 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-30 19:59:01,594 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,594 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-30 19:59:01,600 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,600 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,600 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:01,600 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:01,600 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,600 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:01,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:01,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,602 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:01,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] 
zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,605 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,605 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:01,605 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:01,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:01,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-30 19:59:01,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:01,606 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,606 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,606 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:01,606 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-30 19:59:01,606 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-30 19:59:01,606 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:01,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:11,606 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-30 19:59:11,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-30 19:59:11,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-30 19:59:11,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
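The write-then-flush cycle now repeats: the client requests another flush of the same table about ten seconds later, and the ProcedureCoordinator drops the completed earlier instance before accepting the new attempt. A hedged sketch of that client-side loop with the standard Table and Admin APIs; the column family, qualifier, and row keys are placeholders rather than the test's actual values.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Repeats a put-then-flush cycle; each flush should add one more HFile to the
// region's store, as seen with sequenceid=5 and then sequenceid=9 in the log.
public final class PutThenFlushLoop {
  public static void main(String[] args) throws Exception {
    TableName name = TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    byte[] family = Bytes.toBytes("info"); // placeholder family name

    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(name)) {
      for (int i = 0; i < 3; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
        table.put(put);
        // Requests a table flush; the test in the log drives the equivalent
        // flush-table-proc path through the master.
        admin.flush(name);
      }
    }
  }
}
```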
2023-05-30 19:59:11,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,621 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:11,621 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:11,622 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-30 19:59:11,622 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-30 19:59:11,623 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,623 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,625 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,625 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:11,625 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:11,625 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:11,625 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,625 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-30 19:59:11,625 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,626 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,626 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-30 19:59:11,626 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,626 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,626 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-30 19:59:11,626 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,626 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-30 19:59:11,627 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:11,627 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-30 19:59:11,627 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-30 19:59:11,627 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-30 19:59:11,627 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:11,627 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. started... 
2023-05-30 19:59:11,627 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing a2b7429fbc533acc65ccb6c943c75f09 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 19:59:11,644 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/da5aab59bb4b47ecac5efd05a83f013a 2023-05-30 19:59:11,652 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/da5aab59bb4b47ecac5efd05a83f013a as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a 2023-05-30 19:59:11,661 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a, entries=1, sequenceid=9, filesize=5.8 K 2023-05-30 19:59:11,662 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a2b7429fbc533acc65ccb6c943c75f09 in 35ms, sequenceid=9, compaction requested=false 2023-05-30 19:59:11,663 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:11,663 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:11,663 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-30 19:59:11,663 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
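Illustrative sketch, not part of the captured test output: the memstore flush recorded above (write to a .tmp HFile, then commit it into the region's info store) can be driven from a client with the public HBase API. The table and column family names below are the ones appearing in the log; the row key "row-1", qualifier "q", and value are made-up placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutThenFlush {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName name =
        TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(name);
         Admin admin = connection.getAdmin()) {
      // Write a single cell into the 'info' family (the family flushed above);
      // row key, qualifier and value are hypothetical.
      Put put = new Put(Bytes.toBytes("row-1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);

      // Force the memstore to disk. On the region server this becomes the
      // "Flushing ... 1/1 column families" / "Committing ... .tmp/... as ..."
      // sequence recorded in the log.
      admin.flush(name);
    }
  }
}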
2023-05-30 19:59:11,663 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,663 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-30 19:59:11,663 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-30 19:59:11,665 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,665 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,665 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,665 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:11,665 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:11,665 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,666 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-30 19:59:11,666 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:11,666 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:11,666 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,666 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,667 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:11,667 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-30 19:59:11,667 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@8a797ec[Count = 0] remaining members to acquire 
global barrier 2023-05-30 19:59:11,667 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-30 19:59:11,667 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,668 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,669 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,669 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,669 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-30 19:59:11,669 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-30 19:59:11,669 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,669 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-30 19:59:11,669 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33129,1685476720403' in zk 2023-05-30 19:59:11,672 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-30 19:59:11,672 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,672 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-30 19:59:11,672 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:11,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:11,672 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-30 19:59:11,673 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:11,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:11,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,674 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:11,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33129,1685476720403': 2023-05-30 19:59:11,675 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-30 19:59:11,675 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-30 19:59:11,675 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-30 19:59:11,675 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-30 19:59:11,675 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,675 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-30 19:59:11,677 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,677 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:11,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:11,677 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,677 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:11,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:11,678 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:11,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:11,678 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,678 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:11,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,679 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:11,680 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,680 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,683 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,683 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:11,683 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,683 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:11,683 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:11,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-30 19:59:11,683 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:11,684 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:11,684 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:11,684 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:59:11,684 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,684 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-30 19:59:11,684 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:11,685 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:11,685 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-30 19:59:11,685 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:11,685 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,685 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2704): Getting current status of procedure from master... 
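Illustrative sketch, not part of the captured test output: the client-side waiting seen above ("Waiting a max of 300000 ms ...", "(#1) Sleeping: 10000ms ...", "Getting current status of procedure from master...") appears to come from the synchronous procedure polling inside HBaseAdmin. A hedged example of driving the same globally coordinated "flush-table-proc" through the public Admin API follows; the empty property map and the explicit status check are assumptions, the signature and instance strings are exactly the ones in the log.

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExecFlushTableProc {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Procedure signature and instance as they appear in the log:
      // the coordinated flush of one table.
      String signature = "flush-table-proc";
      String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
      Map<String, String> props = new HashMap<>();

      // Submit the procedure to the master; the synchronous Admin
      // implementation polls until the master reports completion.
      admin.execProcedure(signature, instance, props);

      // The completion status can also be queried explicitly.
      boolean done = admin.isProcedureFinished(signature, instance, props);
      System.out.println("flush-table-proc done: " + done);
    }
  }
}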
2023-05-30 19:59:21,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-30 19:59:21,699 INFO [Listener at localhost/44453] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476720850 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476761689 2023-05-30 19:59:21,700 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK], DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK]] 2023-05-30 19:59:21,700 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(716): hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476720850 is not closed yet, will try archiving it next time 2023-05-30 19:59:21,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-30 19:59:21,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-30 19:59:21,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,708 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:21,708 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:21,708 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-30 19:59:21,708 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-30 19:59:21,709 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,709 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,710 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,710 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:21,710 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:21,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:21,710 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,710 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-30 19:59:21,710 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,711 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-30 19:59:21,711 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,711 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,711 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-30 19:59:21,711 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,711 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-30 19:59:21,711 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:21,712 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-30 19:59:21,712 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-30 19:59:21,712 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-30 19:59:21,712 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:21,712 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. started... 2023-05-30 19:59:21,712 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing a2b7429fbc533acc65ccb6c943c75f09 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 19:59:21,723 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/cec49a2a4dac4d2994090b89bf508bf6 2023-05-30 19:59:21,729 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/cec49a2a4dac4d2994090b89bf508bf6 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6 2023-05-30 19:59:21,734 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6, entries=1, sequenceid=13, filesize=5.8 K 2023-05-30 19:59:21,735 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a2b7429fbc533acc65ccb6c943c75f09 in 23ms, sequenceid=13, compaction requested=true 2023-05-30 19:59:21,735 DEBUG 
[rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:21,735 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:21,735 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-30 19:59:21,735 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-30 19:59:21,735 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,735 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-30 19:59:21,735 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-30 19:59:21,737 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,737 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:21,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:21,737 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,737 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-30 19:59:21,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:21,738 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:21,738 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,738 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,738 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:21,739 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-30 19:59:21,739 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@240c8abf[Count = 0] remaining members to acquire global barrier 2023-05-30 19:59:21,739 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-30 19:59:21,739 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,740 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,740 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,740 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,740 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-30 19:59:21,740 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-30 19:59:21,740 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33129,1685476720403' in zk 2023-05-30 19:59:21,740 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,740 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-30 19:59:21,746 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-30 19:59:21,746 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,746 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:59:21,746 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,746 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-30 19:59:21,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:21,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:21,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:21,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:21,747 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:21,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,748 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,749 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33129,1685476720403': 2023-05-30 19:59:21,749 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-30 19:59:21,749 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-30 19:59:21,749 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-30 19:59:21,749 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-30 19:59:21,749 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,749 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-30 19:59:21,751 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,751 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,751 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,751 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,751 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:21,751 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,751 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:21,751 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:21,752 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:21,752 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:21,752 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,752 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,753 DEBUG 
[(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,755 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,757 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:21,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:21,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:21,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,757 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:21,757 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:21,758 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,758 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,758 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:21,758 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-30 19:59:21,757 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:21,758 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-30 19:59:21,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:21,759 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:21,759 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:21,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-30 19:59:31,758 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2704): Getting current status of procedure from master... 
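Illustrative sketch, not part of the captured test output: the "Current zk system:" dumps above show the barrier layout the procedure uses under /hbase/flush-table-proc (abort, acquired and reached children, with one znode per member). Purely to illustrate that layout, the following walks the same subtree with the plain ZooKeeper client; the quorum address 127.0.0.1:61525 is taken from this log and the session timeout is an arbitrary assumption.

import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class FlushProcZnodeDump {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Quorum address copied from the log above; adjust for a real cluster.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:61525", 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();
    try {
      dump(zk, "/hbase/flush-table-proc", 0);
    } finally {
      zk.close();
    }
  }

  // Prints one line per znode, roughly in the "|-" / "|----" style of the
  // ZKProcedureUtil dumps in the log.
  private static void dump(ZooKeeper zk, String path, int depth) throws Exception {
    StringBuilder line = new StringBuilder("|-");
    for (int i = 0; i < depth; i++) {
      line.append("---");
    }
    line.append(path.substring(path.lastIndexOf('/') + 1));
    System.out.println(line);
    List<String> children = zk.getChildren(path, false);
    for (String child : children) {
      dump(zk, path + "/" + child, depth + 1);
    }
  }
}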
2023-05-30 19:59:31,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-30 19:59:31,760 DEBUG [Listener at localhost/44453] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 19:59:31,765 DEBUG [Listener at localhost/44453] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 19:59:31,765 DEBUG [Listener at localhost/44453] regionserver.HStore(1912): a2b7429fbc533acc65ccb6c943c75f09/info is initiating minor compaction (all files) 2023-05-30 19:59:31,765 INFO [Listener at localhost/44453] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:59:31,765 INFO [Listener at localhost/44453] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:31,765 INFO [Listener at localhost/44453] regionserver.HRegion(2259): Starting compaction of a2b7429fbc533acc65ccb6c943c75f09/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:31,766 INFO [Listener at localhost/44453] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6] into tmpdir=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp, totalSize=17.4 K 2023-05-30 19:59:31,766 DEBUG [Listener at localhost/44453] compactions.Compactor(207): Compacting fa89f6f994884f2b910f6c0e3a59f6b4, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685476741540 2023-05-30 19:59:31,767 DEBUG [Listener at localhost/44453] compactions.Compactor(207): Compacting da5aab59bb4b47ecac5efd05a83f013a, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685476751609 2023-05-30 19:59:31,767 DEBUG [Listener at localhost/44453] compactions.Compactor(207): Compacting cec49a2a4dac4d2994090b89bf508bf6, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685476761687 2023-05-30 19:59:31,778 INFO [Listener at localhost/44453] throttle.PressureAwareThroughputController(145): a2b7429fbc533acc65ccb6c943c75f09#info#compaction#20 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 19:59:31,792 DEBUG [Listener at localhost/44453] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/76129a59aeae426197d58fdb712a310e as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/76129a59aeae426197d58fdb712a310e 2023-05-30 19:59:31,799 INFO [Listener at localhost/44453] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a2b7429fbc533acc65ccb6c943c75f09/info of a2b7429fbc533acc65ccb6c943c75f09 into 76129a59aeae426197d58fdb712a310e(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-30 19:59:31,799 DEBUG [Listener at localhost/44453] regionserver.HRegion(2289): Compaction status journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:31,817 INFO [Listener at localhost/44453] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476761689 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476771801 2023-05-30 19:59:31,817 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK], DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK]] 2023-05-30 19:59:31,817 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(716): hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476761689 is not closed yet, will try archiving it next time 2023-05-30 19:59:31,817 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476720850 to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476720850 2023-05-30 19:59:31,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-30 19:59:31,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
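Illustrative sketch, not part of the captured test output: the compaction selection and the WAL roll logged above can also be requested explicitly through the Admin API. The table and server names below are copied from the log; whether the request ends up as a minor or major compaction is decided by the server-side policy, as in the ExploringCompactionPolicy selection recorded earlier.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CompactAndRollWal {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName table =
          TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");

      // Ask the region server to compact the table's store files; the
      // compaction policy on the server decides which files are merged.
      admin.compact(table);

      // Roll the WAL of the region server hosting the region; the server
      // name is the one appearing in the log and is only illustrative.
      ServerName rs =
          ServerName.valueOf("jenkins-hbase4.apache.org,33129,1685476720403");
      admin.rollWALWriter(rs);
    }
  }
}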
2023-05-30 19:59:31,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,825 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:31,825 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:31,825 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-30 19:59:31,825 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-30 19:59:31,826 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,826 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,830 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,830 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:31,830 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:31,830 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:31,830 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,830 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-30 19:59:31,830 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,831 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,831 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-30 19:59:31,831 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,831 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,831 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-30 19:59:31,831 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,831 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-30 19:59:31,831 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-30 19:59:31,832 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-30 19:59:31,832 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-30 19:59:31,832 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-30 19:59:31,832 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:31,832 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. started... 
2023-05-30 19:59:31,832 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing a2b7429fbc533acc65ccb6c943c75f09 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 19:59:31,842 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/b56ee8c864ed403f89c99a4909c8ad4a 2023-05-30 19:59:31,847 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/b56ee8c864ed403f89c99a4909c8ad4a as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/b56ee8c864ed403f89c99a4909c8ad4a 2023-05-30 19:59:31,852 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/b56ee8c864ed403f89c99a4909c8ad4a, entries=1, sequenceid=18, filesize=5.8 K 2023-05-30 19:59:31,853 INFO [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a2b7429fbc533acc65ccb6c943c75f09 in 21ms, sequenceid=18, compaction requested=false 2023-05-30 19:59:31,853 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:31,853 DEBUG [rs(jenkins-hbase4.apache.org,33129,1685476720403)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:31,853 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-30 19:59:31,853 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
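The flush above is executed by the region-server member of the flush-table-proc procedure: the ~1.05 KB memstore of a2b7429fbc533acc65ccb6c943c75f09/info becomes a new 5.8 K HFile at sequenceid=18. From a client's point of view the whole distributed procedure is started with a single Admin call; a minimal sketch, assuming a reachable cluster and reusing the table name from this run:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class FlushTableExample {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // Admin.flush drives the "flush-table-proc" distributed procedure seen
        // in the log: the master coordinates, and each region server flushes
        // its regions' memstores to new HFiles.
        admin.flush(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
      }
    }
  }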
2023-05-30 19:59:31,853 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,853 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-30 19:59:31,854 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-30 19:59:31,855 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,855 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:31,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:31,856 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,856 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-30 19:59:31,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:31,856 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:31,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,857 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:31,857 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33129,1685476720403' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-30 19:59:31,857 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@3965dfcc[Count = 0] remaining members to acquire 
global barrier 2023-05-30 19:59:31,857 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-30 19:59:31,857 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,859 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,859 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,860 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-30 19:59:31,860 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-30 19:59:31,860 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33129,1685476720403' in zk 2023-05-30 19:59:31,860 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,860 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-30 19:59:31,861 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-30 19:59:31,861 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,861 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
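The tree dumps above are ZKProcedureUtil printing the barrier znodes the procedure uses: members register themselves under acquired/, the coordinator creates reached/ once every member has acquired, and abort/ carries failure notifications. As an illustration only (the quorum address 127.0.0.1:61525 and the /hbase/flush-table-proc base path are specific to this run), the same layout can be inspected with a plain ZooKeeper client:

  import java.util.List;
  import org.apache.zookeeper.ZooKeeper;

  public class FlushProcZnodeDump {
    public static void main(String[] args) throws Exception {
      // Connect string copied from this run; pass a no-op watcher.
      ZooKeeper zk = new ZooKeeper("127.0.0.1:61525", 30_000, event -> { });
      try {
        for (String phase : new String[] {"acquired", "reached", "abort"}) {
          String path = "/hbase/flush-table-proc/" + phase;
          // List the child znodes for each barrier phase, mirroring the
          // |-acquired / |-reached / |-abort dump in the log.
          List<String> children = zk.getChildren(path, false);
          System.out.println(path + " -> " + children);
        }
      } finally {
        zk.close();
      }
    }
  }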
2023-05-30 19:59:31,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:31,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:31,861 DEBUG [member: 'jenkins-hbase4.apache.org,33129,1685476720403' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-30 19:59:31,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:31,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:31,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:31,863 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33129,1685476720403': 2023-05-30 19:59:31,864 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33129,1685476720403' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-30 19:59:31,864 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-30 19:59:31,864 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-30 19:59:31,864 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-30 19:59:31,864 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,864 INFO [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-30 19:59:31,866 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,866 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-30 19:59:31,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-30 19:59:31,866 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,866 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:31,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-30 19:59:31,866 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,866 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:31,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:31,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,867 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-30 19:59:31,867 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,868 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-30 19:59:31,868 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-30 19:59:31,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:31,871 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-30 19:59:31,871 DEBUG [(jenkins-hbase4.apache.org,44079,1685476720363)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-30 19:59:31,872 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-30 19:59:31,872 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-30 19:59:31,871 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:31,871 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,872 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-30 19:59:31,872 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,872 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-30 19:59:31,873 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-30 19:59:31,873 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:41,872 DEBUG [Listener at localhost/44453] client.HBaseAdmin(2704): Getting current status of procedure from master... 
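The HBaseAdmin lines above ("Waiting a max of 300000 ms ...", "Sleeping: 10000ms ...", "Getting current status of procedure from master...") come from the client-side wait loop: the client submits flush-table-proc and then periodically asks the master whether the procedure has finished. A hedged sketch of the same round trip through the public Admin API, assuming a reachable cluster and the table name from this run:

  import java.util.Collections;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class ExecFlushProcedureExample {
    public static void main(String[] args) throws Exception {
      String signature = "flush-table-proc";                                      // procedure type
      String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling";  // table name
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // Submits the procedure and waits for the master to report completion;
        // the "Waiting a max of ... / Sleeping ..." lines above are this wait loop.
        admin.execProcedure(signature, instance, Collections.emptyMap());
        // The completion check can also be issued explicitly, which is the
        // MasterRpcServices "is done" request recorded in the log.
        boolean done = admin.isProcedureFinished(signature, instance, Collections.emptyMap());
        System.out.println("flush-table-proc finished: " + done);
      }
    }
  }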
2023-05-30 19:59:41,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44079] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-30 19:59:41,883 INFO [Listener at localhost/44453] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476771801 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476781875 2023-05-30 19:59:41,883 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38945,DS-334e1331-21d2-42d4-ac85-c46f01d3ff86,DISK], DatanodeInfoWithStorage[127.0.0.1:46327,DS-e5192cd5-1adf-4a41-89c2-f96de59b9cbc,DISK]] 2023-05-30 19:59:41,883 DEBUG [Listener at localhost/44453] wal.AbstractFSWAL(716): hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476771801 is not closed yet, will try archiving it next time 2023-05-30 19:59:41,883 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 19:59:41,883 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476761689 to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs/jenkins-hbase4.apache.org%2C33129%2C1685476720403.1685476761689 2023-05-30 19:59:41,883 INFO [Listener at localhost/44453] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-30 19:59:41,883 DEBUG [Listener at localhost/44453] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b9b113b to 127.0.0.1:61525 2023-05-30 19:59:41,884 DEBUG [Listener at localhost/44453] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:59:41,885 DEBUG [Listener at localhost/44453] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 19:59:41,885 DEBUG [Listener at localhost/44453] util.JVMClusterUtil(257): Found active master hash=1838841418, stopped=false 2023-05-30 19:59:41,886 INFO [Listener at localhost/44453] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:59:41,889 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:59:41,889 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 19:59:41,889 INFO [Listener at localhost/44453] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 19:59:41,889 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
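The roll at the top of this span closes the 1685476771801 WAL and opens 1685476781875 on a fresh two-datanode pipeline, before the mini-cluster shutdown begins. Here the roll is driven from the test thread, but an equivalent request can be made per region server from a client; a minimal sketch, assuming a reachable cluster:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class RollWalExample {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // Ask every live region server to roll its write-ahead log: the server
        // closes the current WAL file, opens a new one, and later archives
        // fully-flushed old files, as the entries above show.
        for (ServerName rs : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
          admin.rollWALWriter(rs);
        }
      }
    }
  }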
2023-05-30 19:59:41,889 DEBUG [Listener at localhost/44453] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x731b38a3 to 127.0.0.1:61525 2023-05-30 19:59:41,890 DEBUG [Listener at localhost/44453] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:59:41,890 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:59:41,890 INFO [Listener at localhost/44453] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33129,1685476720403' ***** 2023-05-30 19:59:41,890 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:59:41,890 INFO [Listener at localhost/44453] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 19:59:41,890 INFO [RS:0;jenkins-hbase4:33129] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 19:59:41,891 INFO [RS:0;jenkins-hbase4:33129] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 19:59:41,891 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 19:59:41,891 INFO [RS:0;jenkins-hbase4:33129] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 19:59:41,891 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(3303): Received CLOSE for a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:59:41,891 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(3303): Received CLOSE for a1ac9b963e8c787c5be46bf984611c9b 2023-05-30 19:59:41,891 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:41,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a2b7429fbc533acc65ccb6c943c75f09, disabling compactions & flushes 2023-05-30 19:59:41,891 DEBUG [RS:0;jenkins-hbase4:33129] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3fe1478a to 127.0.0.1:61525 2023-05-30 19:59:41,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:41,891 DEBUG [RS:0;jenkins-hbase4:33129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:59:41,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:41,892 INFO [RS:0;jenkins-hbase4:33129] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 19:59:41,892 INFO [RS:0;jenkins-hbase4:33129] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 19:59:41,892 INFO [RS:0;jenkins-hbase4:33129] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 19:59:41,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 
after waiting 0 ms 2023-05-30 19:59:41,892 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 19:59:41,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:41,892 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a2b7429fbc533acc65ccb6c943c75f09 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 19:59:41,892 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-30 19:59:41,892 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1478): Online Regions={a2b7429fbc533acc65ccb6c943c75f09=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09., 1588230740=hbase:meta,,1.1588230740, a1ac9b963e8c787c5be46bf984611c9b=hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b.} 2023-05-30 19:59:41,892 DEBUG [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1504): Waiting on 1588230740, a1ac9b963e8c787c5be46bf984611c9b, a2b7429fbc533acc65ccb6c943c75f09 2023-05-30 19:59:41,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:59:41,893 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:59:41,893 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:59:41,893 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:59:41,893 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:59:41,893 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-30 19:59:41,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/42f91e25ab184afdb9113d4268631720 2023-05-30 19:59:41,911 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/.tmp/info/2a8b6b7585e043d3a46dfb06ffbcacd0 2023-05-30 19:59:41,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/.tmp/info/42f91e25ab184afdb9113d4268631720 as 
hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/42f91e25ab184afdb9113d4268631720 2023-05-30 19:59:41,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/42f91e25ab184afdb9113d4268631720, entries=1, sequenceid=22, filesize=5.8 K 2023-05-30 19:59:41,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a2b7429fbc533acc65ccb6c943c75f09 in 29ms, sequenceid=22, compaction requested=true 2023-05-30 19:59:41,924 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6] to archive 2023-05-30 19:59:41,925 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-30 19:59:41,927 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4 to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/fa89f6f994884f2b910f6c0e3a59f6b4 2023-05-30 19:59:41,928 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/.tmp/table/73359b230c664ebba9c21d790665a634 2023-05-30 19:59:41,929 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/da5aab59bb4b47ecac5efd05a83f013a 2023-05-30 19:59:41,930 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6 to hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/info/cec49a2a4dac4d2994090b89bf508bf6 2023-05-30 19:59:41,937 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/.tmp/info/2a8b6b7585e043d3a46dfb06ffbcacd0 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/info/2a8b6b7585e043d3a46dfb06ffbcacd0 2023-05-30 19:59:41,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/info/2a8b6b7585e043d3a46dfb06ffbcacd0, entries=20, sequenceid=14, filesize=7.6 K 2023-05-30 19:59:41,945 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/.tmp/table/73359b230c664ebba9c21d790665a634 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/table/73359b230c664ebba9c21d790665a634 2023-05-30 19:59:41,947 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/a2b7429fbc533acc65ccb6c943c75f09/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-30 19:59:41,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a2b7429fbc533acc65ccb6c943c75f09: 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685476721444.a2b7429fbc533acc65ccb6c943c75f09. 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a1ac9b963e8c787c5be46bf984611c9b, disabling compactions & flushes 2023-05-30 19:59:41,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. after waiting 0 ms 2023-05-30 19:59:41,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:59:41,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/table/73359b230c664ebba9c21d790665a634, entries=4, sequenceid=14, filesize=4.9 K 2023-05-30 19:59:41,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 62ms, sequenceid=14, compaction requested=false 2023-05-30 19:59:41,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/namespace/a1ac9b963e8c787c5be46bf984611c9b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-30 19:59:41,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 2023-05-30 19:59:41,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a1ac9b963e8c787c5be46bf984611c9b: 2023-05-30 19:59:41,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685476720950.a1ac9b963e8c787c5be46bf984611c9b. 
2023-05-30 19:59:41,961 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-30 19:59:41,961 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-30 19:59:41,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:59:41,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:59:41,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-30 19:59:42,092 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33129,1685476720403; all regions closed. 2023-05-30 19:59:42,093 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:42,099 DEBUG [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs 2023-05-30 19:59:42,100 INFO [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33129%2C1685476720403.meta:.meta(num 1685476720901) 2023-05-30 19:59:42,100 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/WALs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:42,105 DEBUG [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs 2023-05-30 19:59:42,105 INFO [RS:0;jenkins-hbase4:33129] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33129%2C1685476720403:(num 1685476781875) 2023-05-30 19:59:42,105 DEBUG [RS:0;jenkins-hbase4:33129] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:59:42,105 INFO [RS:0;jenkins-hbase4:33129] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:59:42,105 INFO [RS:0;jenkins-hbase4:33129] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-30 19:59:42,105 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
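Once every entry in a rolled WAL has been flushed, the file is moved to the shared oldWALs directory, which is what the "Moved 1 WAL file(s)" and "Moved 2 WAL file(s)" lines above record during WAL shutdown. Purely as an illustration (the HDFS URI and directory below are specific to this run), the archived files could be listed with the Hadoop FileSystem API:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ListOldWals {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Path copied from this run; adjust for your own cluster root.
      Path oldWals = new Path("hdfs://localhost:40309/user/jenkins/test-data/"
          + "f93000ee-5497-fbe2-cc98-917ea6d1f197/oldWALs");
      try (FileSystem fs = oldWals.getFileSystem(conf)) {
        // Print each archived WAL file and its size.
        for (FileStatus stat : fs.listStatus(oldWals)) {
          System.out.println(stat.getPath().getName() + " " + stat.getLen() + " bytes");
        }
      }
    }
  }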
2023-05-30 19:59:42,106 INFO [RS:0;jenkins-hbase4:33129] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33129 2023-05-30 19:59:42,108 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33129,1685476720403 2023-05-30 19:59:42,108 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:59:42,108 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:59:42,110 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33129,1685476720403] 2023-05-30 19:59:42,110 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33129,1685476720403; numProcessing=1 2023-05-30 19:59:42,111 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33129,1685476720403 already deleted, retry=false 2023-05-30 19:59:42,111 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33129,1685476720403 expired; onlineServers=0 2023-05-30 19:59:42,111 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44079,1685476720363' ***** 2023-05-30 19:59:42,111 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-30 19:59:42,112 DEBUG [M:0;jenkins-hbase4:44079] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69a5f721, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:59:42,112 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:59:42,112 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44079,1685476720363; all regions closed. 2023-05-30 19:59:42,112 DEBUG [M:0;jenkins-hbase4:44079] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 19:59:42,112 DEBUG [M:0;jenkins-hbase4:44079] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-30 19:59:42,112 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-30 19:59:42,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476720545] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476720545,5,FailOnTimeoutGroup] 2023-05-30 19:59:42,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476720545] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476720545,5,FailOnTimeoutGroup] 2023-05-30 19:59:42,112 DEBUG [M:0;jenkins-hbase4:44079] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-30 19:59:42,114 INFO [M:0;jenkins-hbase4:44079] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-30 19:59:42,114 INFO [M:0;jenkins-hbase4:44079] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-30 19:59:42,114 INFO [M:0;jenkins-hbase4:44079] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-30 19:59:42,114 DEBUG [M:0;jenkins-hbase4:44079] master.HMaster(1512): Stopping service threads 2023-05-30 19:59:42,114 INFO [M:0;jenkins-hbase4:44079] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-30 19:59:42,114 ERROR [M:0;jenkins-hbase4:44079] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-30 19:59:42,115 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-30 19:59:42,115 INFO [M:0;jenkins-hbase4:44079] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-30 19:59:42,115 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:42,115 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-30 19:59:42,115 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:59:42,115 DEBUG [M:0;jenkins-hbase4:44079] zookeeper.ZKUtil(398): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-30 19:59:42,115 WARN [M:0;jenkins-hbase4:44079] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-30 19:59:42,115 INFO [M:0;jenkins-hbase4:44079] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-30 19:59:42,116 INFO [M:0;jenkins-hbase4:44079] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-30 19:59:42,116 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:59:42,116 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:42,116 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:42,116 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:59:42,116 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-30 19:59:42,116 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.90 KB heapSize=47.33 KB 2023-05-30 19:59:42,127 INFO [M:0;jenkins-hbase4:44079] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.90 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fb8bcd2eba074bfbb3b1b957a6aae825 2023-05-30 19:59:42,131 INFO [M:0;jenkins-hbase4:44079] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fb8bcd2eba074bfbb3b1b957a6aae825 2023-05-30 19:59:42,132 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/fb8bcd2eba074bfbb3b1b957a6aae825 as hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fb8bcd2eba074bfbb3b1b957a6aae825 2023-05-30 19:59:42,137 INFO [M:0;jenkins-hbase4:44079] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fb8bcd2eba074bfbb3b1b957a6aae825 2023-05-30 19:59:42,137 INFO [M:0;jenkins-hbase4:44079] regionserver.HStore(1080): Added hdfs://localhost:40309/user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/fb8bcd2eba074bfbb3b1b957a6aae825, entries=11, sequenceid=100, filesize=6.1 K 2023-05-30 19:59:42,138 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegion(2948): Finished flush of dataSize ~38.90 KB/39836, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=100, compaction requested=false 2023-05-30 19:59:42,139 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:42,139 DEBUG [M:0;jenkins-hbase4:44079] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:59:42,139 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/f93000ee-5497-fbe2-cc98-917ea6d1f197/MasterData/WALs/jenkins-hbase4.apache.org,44079,1685476720363 2023-05-30 19:59:42,142 INFO [M:0;jenkins-hbase4:44079] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-30 19:59:42,142 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 19:59:42,142 INFO [M:0;jenkins-hbase4:44079] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44079 2023-05-30 19:59:42,145 DEBUG [M:0;jenkins-hbase4:44079] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44079,1685476720363 already deleted, retry=false 2023-05-30 19:59:42,210 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:59:42,210 INFO [RS:0;jenkins-hbase4:33129] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33129,1685476720403; zookeeper connection closed. 
2023-05-30 19:59:42,210 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): regionserver:33129-0x1007dac41de0001, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:59:42,210 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1c459382] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1c459382 2023-05-30 19:59:42,210 INFO [Listener at localhost/44453] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-30 19:59:42,310 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:59:42,310 INFO [M:0;jenkins-hbase4:44079] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44079,1685476720363; zookeeper connection closed. 2023-05-30 19:59:42,310 DEBUG [Listener at localhost/44453-EventThread] zookeeper.ZKWatcher(600): master:44079-0x1007dac41de0000, quorum=127.0.0.1:61525, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 19:59:42,311 WARN [Listener at localhost/44453] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:59:42,315 INFO [Listener at localhost/44453] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:59:42,330 WARN [BP-1001563938-172.31.14.131-1685476719810 heartbeating to localhost/127.0.0.1:40309] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1001563938-172.31.14.131-1685476719810 (Datanode Uuid 71a3fb9c-d51e-4792-bbd7-f0b6665ed611) service to localhost/127.0.0.1:40309 2023-05-30 19:59:42,331 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/dfs/data/data3/current/BP-1001563938-172.31.14.131-1685476719810] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:59:42,331 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/dfs/data/data4/current/BP-1001563938-172.31.14.131-1685476719810] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:59:42,420 WARN [Listener at localhost/44453] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 19:59:42,423 INFO [Listener at localhost/44453] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:59:42,527 WARN [BP-1001563938-172.31.14.131-1685476719810 heartbeating to localhost/127.0.0.1:40309] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 19:59:42,527 WARN [BP-1001563938-172.31.14.131-1685476719810 heartbeating to localhost/127.0.0.1:40309] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1001563938-172.31.14.131-1685476719810 (Datanode Uuid 53ee8fee-ae27-49ef-95a5-37bd8cc8a039) service to localhost/127.0.0.1:40309 2023-05-30 19:59:42,527 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/dfs/data/data1/current/BP-1001563938-172.31.14.131-1685476719810] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:59:42,528 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/cluster_5548c4d3-860f-b9b7-6f8e-2be0c55aac2c/dfs/data/data2/current/BP-1001563938-172.31.14.131-1685476719810] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 19:59:42,539 INFO [Listener at localhost/44453] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 19:59:42,651 INFO [Listener at localhost/44453] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-30 19:59:42,658 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 19:59:42,672 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-30 19:59:42,682 INFO [Listener at localhost/44453] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 86) - Thread LEAK? -, OpenFileDescriptor=498 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=28 (was 43), ProcessCount=171 (was 168) - ProcessCount LEAK? -, AvailableMemoryMB=2707 (was 2917) 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=28, ProcessCount=171, AvailableMemoryMB=2707 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/hadoop.log.dir so I do NOT create it in target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/e1682b63-4939-0df4-a237-e6cdafaa98c2/hadoop.tmp.dir so I do NOT create it in target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8, deleteOnExit=true 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-30 19:59:42,690 INFO [Listener at localhost/44453] 
hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/test.cache.data in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/hadoop.tmp.dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/hadoop.log.dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-30 19:59:42,691 DEBUG [Listener at localhost/44453] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-30 19:59:42,691 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO 
[Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/nfs.dump.dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/java.io.tmpdir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-30 19:59:42,692 INFO [Listener at localhost/44453] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-30 19:59:42,694 WARN [Listener at localhost/44453] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-30 19:59:42,696 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:59:42,697 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:59:42,733 WARN [Listener at localhost/44453] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:59:42,735 INFO [Listener at localhost/44453] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:59:42,739 INFO [Listener at localhost/44453] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/java.io.tmpdir/Jetty_localhost_45977_hdfs____.i1ltwe/webapp 2023-05-30 19:59:42,828 INFO [Listener at localhost/44453] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45977 2023-05-30 19:59:42,830 WARN [Listener at localhost/44453] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-30 19:59:42,832 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 19:59:42,833 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 19:59:42,870 WARN [Listener at localhost/40151] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:59:42,879 WARN [Listener at localhost/40151] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:59:42,881 WARN [Listener at localhost/40151] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:59:42,882 INFO [Listener at localhost/40151] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:59:42,887 INFO [Listener at localhost/40151] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/java.io.tmpdir/Jetty_localhost_44083_datanode____.v1q0a6/webapp 2023-05-30 19:59:42,977 INFO [Listener at localhost/40151] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44083 2023-05-30 19:59:42,983 WARN [Listener at localhost/36811] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:59:42,996 WARN [Listener at localhost/36811] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 19:59:42,998 WARN [Listener at localhost/36811] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 19:59:42,999 INFO [Listener at localhost/36811] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 19:59:43,003 INFO [Listener at localhost/36811] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/java.io.tmpdir/Jetty_localhost_33069_datanode____dnibqy/webapp 2023-05-30 19:59:43,081 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4011712613f54eb: Processing first storage report for DS-bcad548f-636f-46dc-97bf-f8b4d6111790 from datanode 89e79c3a-f42a-4064-8b85-2a202d55775f 2023-05-30 19:59:43,081 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4011712613f54eb: from storage DS-bcad548f-636f-46dc-97bf-f8b4d6111790 node DatanodeRegistration(127.0.0.1:33913, datanodeUuid=89e79c3a-f42a-4064-8b85-2a202d55775f, infoPort=38895, infoSecurePort=0, ipcPort=36811, storageInfo=lv=-57;cid=testClusterID;nsid=1000445791;c=1685476782699), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:59:43,081 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4011712613f54eb: Processing first storage report for DS-7dfa036a-003c-4f0f-bcae-8940abadf356 from datanode 89e79c3a-f42a-4064-8b85-2a202d55775f 2023-05-30 19:59:43,081 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4011712613f54eb: from storage DS-7dfa036a-003c-4f0f-bcae-8940abadf356 node DatanodeRegistration(127.0.0.1:33913, datanodeUuid=89e79c3a-f42a-4064-8b85-2a202d55775f, infoPort=38895, infoSecurePort=0, ipcPort=36811, storageInfo=lv=-57;cid=testClusterID;nsid=1000445791;c=1685476782699), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:59:43,102 INFO [Listener at localhost/36811] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33069 2023-05-30 19:59:43,109 WARN [Listener at localhost/40695] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 19:59:43,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x85c85ae327fe81e7: Processing first storage report for DS-7e8f37fb-4d42-412f-9653-a65b64169a89 from datanode 5a352b9d-0079-4b86-b44a-0a59573525b9 2023-05-30 19:59:43,204 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x85c85ae327fe81e7: from storage DS-7e8f37fb-4d42-412f-9653-a65b64169a89 node DatanodeRegistration(127.0.0.1:39865, datanodeUuid=5a352b9d-0079-4b86-b44a-0a59573525b9, infoPort=36685, infoSecurePort=0, ipcPort=40695, storageInfo=lv=-57;cid=testClusterID;nsid=1000445791;c=1685476782699), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:59:43,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x85c85ae327fe81e7: Processing first storage report for DS-76d85ab4-687b-4c6b-a7dc-9bdb0abb0b7e from datanode 5a352b9d-0079-4b86-b44a-0a59573525b9 2023-05-30 19:59:43,204 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x85c85ae327fe81e7: from storage DS-76d85ab4-687b-4c6b-a7dc-9bdb0abb0b7e node DatanodeRegistration(127.0.0.1:39865, datanodeUuid=5a352b9d-0079-4b86-b44a-0a59573525b9, infoPort=36685, infoSecurePort=0, ipcPort=40695, storageInfo=lv=-57;cid=testClusterID;nsid=1000445791;c=1685476782699), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 19:59:43,215 DEBUG [Listener at localhost/40695] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca 2023-05-30 19:59:43,217 INFO [Listener at localhost/40695] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/zookeeper_0, clientPort=49181, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-30 19:59:43,218 INFO [Listener at localhost/40695] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49181 2023-05-30 19:59:43,218 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,219 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,233 INFO [Listener at localhost/40695] util.FSUtils(471): Created version file at hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616 with version=8 2023-05-30 19:59:43,233 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging 2023-05-30 19:59:43,234 INFO [Listener at localhost/40695] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:59:43,234 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,235 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,235 INFO [Listener at localhost/40695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:59:43,235 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,235 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 
19:59:43,235 INFO [Listener at localhost/40695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:59:43,236 INFO [Listener at localhost/40695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41009 2023-05-30 19:59:43,237 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,237 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,238 INFO [Listener at localhost/40695] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41009 connecting to ZooKeeper ensemble=127.0.0.1:49181 2023-05-30 19:59:43,244 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:410090x0, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:59:43,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41009-0x1007dad37730000 connected 2023-05-30 19:59:43,259 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:59:43,259 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:59:43,259 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:59:43,260 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41009 2023-05-30 19:59:43,260 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41009 2023-05-30 19:59:43,260 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41009 2023-05-30 19:59:43,260 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41009 2023-05-30 19:59:43,260 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41009 2023-05-30 19:59:43,261 INFO [Listener at localhost/40695] master.HMaster(444): hbase.rootdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616, hbase.cluster.distributed=false 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,273 INFO [Listener at 
localhost/40695] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 19:59:43,273 INFO [Listener at localhost/40695] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 19:59:43,275 INFO [Listener at localhost/40695] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45089 2023-05-30 19:59:43,275 INFO [Listener at localhost/40695] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 19:59:43,277 DEBUG [Listener at localhost/40695] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 19:59:43,277 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,278 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,279 INFO [Listener at localhost/40695] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45089 connecting to ZooKeeper ensemble=127.0.0.1:49181 2023-05-30 19:59:43,283 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:450890x0, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 19:59:43,284 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45089-0x1007dad37730001 connected 2023-05-30 19:59:43,284 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 19:59:43,285 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 19:59:43,285 DEBUG [Listener at localhost/40695] zookeeper.ZKUtil(164): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 19:59:43,286 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45089 2023-05-30 19:59:43,286 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45089 2023-05-30 19:59:43,286 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45089 2023-05-30 19:59:43,287 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45089 2023-05-30 19:59:43,287 DEBUG [Listener at localhost/40695] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45089 2023-05-30 19:59:43,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,289 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:59:43,289 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,291 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:59:43,291 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 19:59:43,291 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:59:43,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41009,1685476783234 from backup master directory 2023-05-30 19:59:43,292 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 19:59:43,294 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,294 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-30 19:59:43,294 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 19:59:43,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,305 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/hbase.id with ID: ce8f5d12-63e2-4b74-a79f-508f876650f3 2023-05-30 19:59:43,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:43,317 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x16d79fc6 to 127.0.0.1:49181 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:59:43,332 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@329c1dd6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:59:43,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:59:43,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-30 19:59:43,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:59:43,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store-tmp 2023-05-30 19:59:43,340 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 19:59:43,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:43,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:59:43,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/WALs/jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,344 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41009%2C1685476783234, suffix=, logDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/WALs/jenkins-hbase4.apache.org,41009,1685476783234, archiveDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/oldWALs, maxLogs=10 2023-05-30 19:59:43,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/WALs/jenkins-hbase4.apache.org,41009,1685476783234/jenkins-hbase4.apache.org%2C41009%2C1685476783234.1685476783344 2023-05-30 19:59:43,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39865,DS-7e8f37fb-4d42-412f-9653-a65b64169a89,DISK], DatanodeInfoWithStorage[127.0.0.1:33913,DS-bcad548f-636f-46dc-97bf-f8b4d6111790,DISK]] 2023-05-30 19:59:43,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:59:43,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:43,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,351 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,352 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-30 19:59:43,352 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-30 19:59:43,353 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 19:59:43,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:59:43,359 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=690044, jitterRate=-0.12256412208080292}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:59:43,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 19:59:43,359 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-30 19:59:43,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-30 19:59:43,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-30 19:59:43,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-30 19:59:43,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-30 19:59:43,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-30 19:59:43,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-30 19:59:43,361 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-30 19:59:43,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-30 19:59:43,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-30 19:59:43,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-30 19:59:43,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-30 19:59:43,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-30 19:59:43,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-30 19:59:43,375 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-30 19:59:43,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-30 19:59:43,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-30 19:59:43,378 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:59:43,378 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 19:59:43,378 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41009,1685476783234, sessionid=0x1007dad37730000, setting cluster-up flag (Was=false) 2023-05-30 19:59:43,383 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-30 19:59:43,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,391 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 
19:59:43,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-30 19:59:43,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:43,397 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.hbase-snapshot/.tmp 2023-05-30 19:59:43,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:59:43,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,401 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476813401 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-30 19:59:43,402 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-30 19:59:43,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-30 19:59:43,403 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-30 19:59:43,403 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-30 19:59:43,403 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476783403,5,FailOnTimeoutGroup] 2023-05-30 19:59:43,403 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476783403,5,FailOnTimeoutGroup] 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-30 19:59:43,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-30 19:59:43,404 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:59:43,414 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:59:43,415 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 19:59:43,415 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616 2023-05-30 19:59:43,422 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:43,423 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:59:43,424 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info 2023-05-30 19:59:43,425 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:59:43,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:59:43,427 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:59:43,427 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:59:43,428 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,428 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:59:43,429 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/table 2023-05-30 19:59:43,429 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:59:43,430 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,430 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740 2023-05-30 19:59:43,431 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740 2023-05-30 19:59:43,433 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:59:43,434 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:59:43,436 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:59:43,436 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=776141, jitterRate=-0.013085916638374329}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 19:59:43,437 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 19:59:43,437 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 19:59:43,437 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 19:59:43,438 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 19:59:43,438 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 19:59:43,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 19:59:43,440 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 19:59:43,441 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 19:59:43,489 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(951): ClusterId : ce8f5d12-63e2-4b74-a79f-508f876650f3 2023-05-30 19:59:43,489 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 19:59:43,492 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 19:59:43,492 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 19:59:43,493 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 19:59:43,494 DEBUG [RS:0;jenkins-hbase4:45089] zookeeper.ReadOnlyZKClient(139): Connect 0x4064231c to 127.0.0.1:49181 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:59:43,499 DEBUG [RS:0;jenkins-hbase4:45089] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ea3d560, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:59:43,499 DEBUG [RS:0;jenkins-hbase4:45089] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a9c486c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 19:59:43,507 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45089 2023-05-30 19:59:43,507 INFO [RS:0;jenkins-hbase4:45089] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 19:59:43,507 INFO [RS:0;jenkins-hbase4:45089] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 19:59:43,507 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-30 19:59:43,508 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,41009,1685476783234 with isa=jenkins-hbase4.apache.org/172.31.14.131:45089, startcode=1685476783273 2023-05-30 19:59:43,508 DEBUG [RS:0;jenkins-hbase4:45089] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 19:59:43,511 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39229, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 19:59:43,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,512 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616 2023-05-30 19:59:43,512 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40151 2023-05-30 19:59:43,512 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 19:59:43,514 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 19:59:43,514 DEBUG [RS:0;jenkins-hbase4:45089] zookeeper.ZKUtil(162): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,514 WARN [RS:0;jenkins-hbase4:45089] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-30 19:59:43,514 INFO [RS:0;jenkins-hbase4:45089] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:59:43,514 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,515 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45089,1685476783273] 2023-05-30 19:59:43,518 DEBUG [RS:0;jenkins-hbase4:45089] zookeeper.ZKUtil(162): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,518 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 19:59:43,519 INFO [RS:0;jenkins-hbase4:45089] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 19:59:43,520 INFO [RS:0;jenkins-hbase4:45089] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 19:59:43,520 INFO [RS:0;jenkins-hbase4:45089] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 19:59:43,520 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,520 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 19:59:43,521 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,522 DEBUG [RS:0;jenkins-hbase4:45089] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 19:59:43,523 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,523 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,523 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,534 INFO [RS:0;jenkins-hbase4:45089] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 19:59:43,534 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45089,1685476783273-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 19:59:43,544 INFO [RS:0;jenkins-hbase4:45089] regionserver.Replication(203): jenkins-hbase4.apache.org,45089,1685476783273 started 2023-05-30 19:59:43,544 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45089,1685476783273, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45089, sessionid=0x1007dad37730001 2023-05-30 19:59:43,544 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 19:59:43,544 DEBUG [RS:0;jenkins-hbase4:45089] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45089,1685476783273' 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45089,1685476783273' 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 19:59:43,545 DEBUG [RS:0;jenkins-hbase4:45089] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 19:59:43,546 DEBUG [RS:0;jenkins-hbase4:45089] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 19:59:43,546 INFO [RS:0;jenkins-hbase4:45089] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 19:59:43,546 INFO [RS:0;jenkins-hbase4:45089] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-30 19:59:43,591 DEBUG [jenkins-hbase4:41009] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 19:59:43,592 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45089,1685476783273, state=OPENING 2023-05-30 19:59:43,593 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 19:59:43,594 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:43,595 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:59:43,595 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 19:59:43,648 INFO [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45089%2C1685476783273, suffix=, logDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273, archiveDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs, maxLogs=32 2023-05-30 19:59:43,655 INFO [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476783648 2023-05-30 19:59:43,655 DEBUG [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39865,DS-7e8f37fb-4d42-412f-9653-a65b64169a89,DISK], DatanodeInfoWithStorage[127.0.0.1:33913,DS-bcad548f-636f-46dc-97bf-f8b4d6111790,DISK]] 2023-05-30 19:59:43,748 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,748 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 19:59:43,751 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60120, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 19:59:43,754 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 19:59:43,754 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 19:59:43,756 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45089%2C1685476783273.meta, suffix=.meta, logDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273, archiveDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs, maxLogs=32 2023-05-30 19:59:43,762 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.meta.1685476783756.meta 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33913,DS-bcad548f-636f-46dc-97bf-f8b4d6111790,DISK], DatanodeInfoWithStorage[127.0.0.1:39865,DS-7e8f37fb-4d42-412f-9653-a65b64169a89,DISK]] 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 19:59:43,762 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 19:59:43,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 19:59:43,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 19:59:43,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info 2023-05-30 19:59:43,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info 2023-05-30 19:59:43,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 19:59:43,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 19:59:43,766 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:59:43,766 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/rep_barrier 2023-05-30 19:59:43,766 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 19:59:43,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 19:59:43,768 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/table 2023-05-30 19:59:43,768 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/table 2023-05-30 19:59:43,768 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 19:59:43,768 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:43,769 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740 2023-05-30 19:59:43,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740 2023-05-30 19:59:43,772 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 19:59:43,773 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 19:59:43,773 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=830478, jitterRate=0.05600857734680176}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 19:59:43,774 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 19:59:43,775 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476783748 2023-05-30 19:59:43,778 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 19:59:43,779 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 19:59:43,779 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45089,1685476783273, state=OPEN 2023-05-30 19:59:43,782 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 19:59:43,782 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 19:59:43,784 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 19:59:43,784 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45089,1685476783273 in 187 msec 2023-05-30 19:59:43,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 19:59:43,786 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 346 msec 2023-05-30 19:59:43,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 389 msec 2023-05-30 19:59:43,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476783788, completionTime=-1 2023-05-30 19:59:43,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 19:59:43,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-30 19:59:43,790 DEBUG [hconnection-0xdfdefef-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:59:43,793 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60128, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:59:43,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 19:59:43,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476843794 2023-05-30 19:59:43,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685476903794 2023-05-30 19:59:43,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-30 19:59:43,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41009,1685476783234-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41009,1685476783234-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41009,1685476783234-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41009, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 19:59:43,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 19:59:43,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 19:59:43,802 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 19:59:43,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 19:59:43,803 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:59:43,804 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:59:43,806 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/hbase/namespace/1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:43,806 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/hbase/namespace/1e5f39411632247dfb17e864603d997c empty. 2023-05-30 19:59:43,806 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/hbase/namespace/1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:43,807 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 19:59:43,816 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 19:59:43,817 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1e5f39411632247dfb17e864603d997c, NAME => 'hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp 2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1e5f39411632247dfb17e864603d997c, disabling compactions & flushes 2023-05-30 19:59:43,824 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 
2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. after waiting 0 ms 2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 19:59:43,824 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 19:59:43,824 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1e5f39411632247dfb17e864603d997c: 2023-05-30 19:59:43,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:59:43,827 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476783827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476783827"}]},"ts":"1685476783827"} 2023-05-30 19:59:43,830 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:59:43,830 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:59:43,831 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476783831"}]},"ts":"1685476783831"} 2023-05-30 19:59:43,832 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 19:59:43,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1e5f39411632247dfb17e864603d997c, ASSIGN}] 2023-05-30 19:59:43,840 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1e5f39411632247dfb17e864603d997c, ASSIGN 2023-05-30 19:59:43,841 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1e5f39411632247dfb17e864603d997c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45089,1685476783273; forceNewPlan=false, retain=false 2023-05-30 19:59:43,992 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1e5f39411632247dfb17e864603d997c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:43,992 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476783992"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476783992"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476783992"}]},"ts":"1685476783992"} 2023-05-30 19:59:43,994 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 1e5f39411632247dfb17e864603d997c, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 19:59:44,150 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 19:59:44,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1e5f39411632247dfb17e864603d997c, NAME => 'hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:59:44,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:44,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,152 INFO [StoreOpener-1e5f39411632247dfb17e864603d997c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,153 DEBUG [StoreOpener-1e5f39411632247dfb17e864603d997c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/info 2023-05-30 19:59:44,153 DEBUG [StoreOpener-1e5f39411632247dfb17e864603d997c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/info 2023-05-30 19:59:44,153 INFO [StoreOpener-1e5f39411632247dfb17e864603d997c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1e5f39411632247dfb17e864603d997c columnFamilyName info 2023-05-30 19:59:44,154 INFO [StoreOpener-1e5f39411632247dfb17e864603d997c-1] regionserver.HStore(310): Store=1e5f39411632247dfb17e864603d997c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:44,154 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,155 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1e5f39411632247dfb17e864603d997c 2023-05-30 19:59:44,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:59:44,159 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1e5f39411632247dfb17e864603d997c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=744626, jitterRate=-0.05315932631492615}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:59:44,159 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1e5f39411632247dfb17e864603d997c: 2023-05-30 19:59:44,160 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c., pid=6, masterSystemTime=1685476784146 2023-05-30 19:59:44,162 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 19:59:44,162 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 
2023-05-30 19:59:44,163 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1e5f39411632247dfb17e864603d997c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:44,163 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476784163"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476784163"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476784163"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476784163"}]},"ts":"1685476784163"} 2023-05-30 19:59:44,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 19:59:44,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 1e5f39411632247dfb17e864603d997c, server=jenkins-hbase4.apache.org,45089,1685476783273 in 171 msec 2023-05-30 19:59:44,169 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 19:59:44,169 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1e5f39411632247dfb17e864603d997c, ASSIGN in 328 msec 2023-05-30 19:59:44,170 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:59:44,170 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476784170"}]},"ts":"1685476784170"} 2023-05-30 19:59:44,171 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 19:59:44,174 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:59:44,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 373 msec 2023-05-30 19:59:44,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 19:59:44,205 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:59:44,205 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:44,209 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 19:59:44,222 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): 
master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:59:44,226 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-05-30 19:59:44,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 19:59:44,238 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 19:59:44,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-30 19:59:44,256 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 19:59:44,258 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 19:59:44,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.964sec 2023-05-30 19:59:44,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 19:59:44,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 19:59:44,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 19:59:44,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41009,1685476783234-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 19:59:44,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41009,1685476783234-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 19:59:44,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 19:59:44,289 DEBUG [Listener at localhost/40695] zookeeper.ReadOnlyZKClient(139): Connect 0x5a82f05b to 127.0.0.1:49181 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 19:59:44,295 DEBUG [Listener at localhost/40695] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f60a908, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 19:59:44,297 DEBUG [hconnection-0x1e774d56-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 19:59:44,300 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 19:59:44,301 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 19:59:44,301 INFO [Listener at localhost/40695] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 19:59:44,304 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 19:59:44,304 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 19:59:44,305 INFO [Listener at localhost/40695] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 19:59:44,307 DEBUG [Listener at localhost/40695] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-30 19:59:44,309 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-30 19:59:44,310 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-30 19:59:44,310 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-30 19:59:44,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 19:59:44,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-30 19:59:44,318 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 19:59:44,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-30 19:59:44,319 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 19:59:44,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:59:44,321 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,321 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c empty. 
2023-05-30 19:59:44,322 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,322 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-30 19:59:44,332 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-30 19:59:44,333 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c4f1eed95fcff6228404d8e96f348e3c, NAME => 'TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/.tmp 2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing c4f1eed95fcff6228404d8e96f348e3c, disabling compactions & flushes 2023-05-30 19:59:44,340 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. after waiting 0 ms 2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:44,340 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 
2023-05-30 19:59:44,340 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 19:59:44,342 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 19:59:44,343 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476784343"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476784343"}]},"ts":"1685476784343"} 2023-05-30 19:59:44,344 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 19:59:44,345 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 19:59:44,345 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476784345"}]},"ts":"1685476784345"} 2023-05-30 19:59:44,347 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-30 19:59:44,350 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, ASSIGN}] 2023-05-30 19:59:44,352 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, ASSIGN 2023-05-30 19:59:44,353 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45089,1685476783273; forceNewPlan=false, retain=false 2023-05-30 19:59:44,504 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c4f1eed95fcff6228404d8e96f348e3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:44,504 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476784503"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476784503"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476784503"}]},"ts":"1685476784503"} 2023-05-30 19:59:44,506 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 19:59:44,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:44,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c4f1eed95fcff6228404d8e96f348e3c, NAME => 'TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.', STARTKEY => '', ENDKEY => ''} 2023-05-30 19:59:44,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 19:59:44,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,663 INFO [StoreOpener-c4f1eed95fcff6228404d8e96f348e3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,664 DEBUG [StoreOpener-c4f1eed95fcff6228404d8e96f348e3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info 2023-05-30 19:59:44,664 DEBUG [StoreOpener-c4f1eed95fcff6228404d8e96f348e3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info 2023-05-30 19:59:44,665 INFO [StoreOpener-c4f1eed95fcff6228404d8e96f348e3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c4f1eed95fcff6228404d8e96f348e3c columnFamilyName info 2023-05-30 19:59:44,665 INFO [StoreOpener-c4f1eed95fcff6228404d8e96f348e3c-1] regionserver.HStore(310): Store=c4f1eed95fcff6228404d8e96f348e3c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 19:59:44,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:44,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 19:59:44,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c4f1eed95fcff6228404d8e96f348e3c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=869630, jitterRate=0.10579195618629456}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 19:59:44,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 19:59:44,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c., pid=11, masterSystemTime=1685476784658 2023-05-30 19:59:44,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:44,673 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 
2023-05-30 19:59:44,673 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c4f1eed95fcff6228404d8e96f348e3c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:44,674 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476784673"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476784673"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476784673"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476784673"}]},"ts":"1685476784673"} 2023-05-30 19:59:44,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-30 19:59:44,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273 in 169 msec 2023-05-30 19:59:44,679 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-30 19:59:44,679 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, ASSIGN in 327 msec 2023-05-30 19:59:44,680 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 19:59:44,680 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476784680"}]},"ts":"1685476784680"} 2023-05-30 19:59:44,682 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-30 19:59:44,684 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 19:59:44,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 373 msec 2023-05-30 19:59:47,493 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 19:59:49,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-30 19:59:49,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-30 19:59:49,520 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-30 19:59:54,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41009] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-30 19:59:54,321 INFO [Listener at localhost/40695] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-30 19:59:54,323 DEBUG [Listener at localhost/40695] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-30 19:59:54,323 DEBUG [Listener at localhost/40695] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 19:59:54,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 19:59:54,337 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 19:59:54,352 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/799634bc483d47788f7fbf22ebe44c37 2023-05-30 19:59:54,361 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/799634bc483d47788f7fbf22ebe44c37 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37 2023-05-30 19:59:54,368 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-30 19:59:54,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] ipc.CallRunner(144): callId: 38 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476804367, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 19:59:54,369 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37, entries=7, sequenceid=11, filesize=12.1 K 2023-05-30 19:59:54,370 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672
for c4f1eed95fcff6228404d8e96f348e3c in 33ms, sequenceid=11, compaction requested=false 2023-05-30 19:59:54,370 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:04,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:04,374 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-30 20:00:04,392 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/db664a67a8854dc883a5391877ad10a0 2023-05-30 20:00:04,400 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/db664a67a8854dc883a5391877ad10a0 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 2023-05-30 20:00:04,405 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0, entries=23, sequenceid=37, filesize=29.0 K 2023-05-30 20:00:04,406 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=2.10 KB/2152 for c4f1eed95fcff6228404d8e96f348e3c in 32ms, sequenceid=37, compaction requested=false 2023-05-30 20:00:04,406 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:04,406 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=41.1 K, sizeToCheck=16.0 K 2023-05-30 20:00:04,406 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:04,406 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 because midkey is the same as first or last row 2023-05-30 20:00:06,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:06,387 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 20:00:06,398 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=47 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/234b222e6ffe4ff88dcaf682b685d2cc 2023-05-30 20:00:06,404 DEBUG [MemStoreFlusher.0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/234b222e6ffe4ff88dcaf682b685d2cc as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc 2023-05-30 20:00:06,410 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc, entries=7, sequenceid=47, filesize=12.1 K 2023-05-30 20:00:06,410 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=17.86 KB/18292 for c4f1eed95fcff6228404d8e96f348e3c in 23ms, sequenceid=47, compaction requested=true 2023-05-30 20:00:06,411 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:06,411 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=53.2 K, sizeToCheck=16.0 K 2023-05-30 20:00:06,411 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:06,411 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 because midkey is the same as first or last row 2023-05-30 20:00:06,411 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:06,411 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:06,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:06,412 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-30 20:00:06,413 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 54449 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:00:06,413 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): c4f1eed95fcff6228404d8e96f348e3c/info is initiating minor compaction (all files) 2023-05-30 20:00:06,413 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c4f1eed95fcff6228404d8e96f348e3c/info in TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 
2023-05-30 20:00:06,413 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp, totalSize=53.2 K 2023-05-30 20:00:06,414 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 799634bc483d47788f7fbf22ebe44c37, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685476794327 2023-05-30 20:00:06,415 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting db664a67a8854dc883a5391877ad10a0, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=37, earliestPutTs=1685476794338 2023-05-30 20:00:06,415 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 234b222e6ffe4ff88dcaf682b685d2cc, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685476804375 2023-05-30 20:00:06,437 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=68 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/67e557bad71f4e56b05a4e50b92b7aa0 2023-05-30 20:00:06,442 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): c4f1eed95fcff6228404d8e96f348e3c#info#compaction#30 average throughput is 18.98 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:06,444 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/67e557bad71f4e56b05a4e50b92b7aa0 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0 2023-05-30 20:00:06,451 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0, entries=18, sequenceid=68, filesize=23.7 K 2023-05-30 20:00:06,452 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=9.46 KB/9684 for c4f1eed95fcff6228404d8e96f348e3c in 40ms, sequenceid=68, compaction requested=false 2023-05-30 20:00:06,453 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:06,453 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=76.9 K, sizeToCheck=16.0 K 2023-05-30 20:00:06,453 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:06,453 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 because midkey is the same as first or last row 2023-05-30 20:00:06,455 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/b22b6a0727674bc98b73dd86ae268c0e as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e 2023-05-30 20:00:06,461 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c4f1eed95fcff6228404d8e96f348e3c/info of c4f1eed95fcff6228404d8e96f348e3c into b22b6a0727674bc98b73dd86ae268c0e(size=43.8 K), total size for store is 67.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:00:06,461 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:06,461 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c., storeName=c4f1eed95fcff6228404d8e96f348e3c/info, priority=13, startTime=1685476806411; duration=0sec 2023-05-30 20:00:06,461 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=67.5 K, sizeToCheck=16.0 K 2023-05-30 20:00:06,461 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:06,462 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e because midkey is the same as first or last row 2023-05-30 20:00:06,462 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:08,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,441 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-30 20:00:08,453 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=82 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/b1fe7d9c1b814aaf98faf5b164ac6956 2023-05-30 20:00:08,459 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/b1fe7d9c1b814aaf98faf5b164ac6956 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956 2023-05-30 20:00:08,464 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956, entries=10, sequenceid=82, filesize=15.3 K 2023-05-30 20:00:08,465 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=18.91 KB/19368 for c4f1eed95fcff6228404d8e96f348e3c in 24ms, sequenceid=82, compaction requested=true 2023-05-30 20:00:08,465 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:08,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,465 DEBUG 
[MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.8 K, sizeToCheck=16.0 K 2023-05-30 20:00:08,465 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:08,465 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e because midkey is the same as first or last row 2023-05-30 20:00:08,465 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:08,465 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:08,465 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-30 20:00:08,467 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 84764 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:00:08,467 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): c4f1eed95fcff6228404d8e96f348e3c/info is initiating minor compaction (all files) 2023-05-30 20:00:08,467 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c4f1eed95fcff6228404d8e96f348e3c/info in TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 
2023-05-30 20:00:08,467 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp, totalSize=82.8 K 2023-05-30 20:00:08,467 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting b22b6a0727674bc98b73dd86ae268c0e, keycount=37, bloomtype=ROW, size=43.8 K, encoding=NONE, compression=NONE, seqNum=47, earliestPutTs=1685476794327 2023-05-30 20:00:08,468 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 67e557bad71f4e56b05a4e50b92b7aa0, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=68, earliestPutTs=1685476806388 2023-05-30 20:00:08,468 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting b1fe7d9c1b814aaf98faf5b164ac6956, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685476806412 2023-05-30 20:00:08,477 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=104 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/6c4c66d37d464581b774a8c54d7a2feb 2023-05-30 20:00:08,480 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-30 20:00:08,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] ipc.CallRunner(144): callId: 105 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476818480, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:08,483 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): c4f1eed95fcff6228404d8e96f348e3c#info#compaction#33 average throughput is 22.23 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:08,484 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/6c4c66d37d464581b774a8c54d7a2feb as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/6c4c66d37d464581b774a8c54d7a2feb 2023-05-30 20:00:08,493 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/6c4c66d37d464581b774a8c54d7a2feb, entries=19, sequenceid=104, filesize=24.7 K 2023-05-30 20:00:08,494 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=10.51 KB/10760 for c4f1eed95fcff6228404d8e96f348e3c in 29ms, sequenceid=104, compaction requested=false 2023-05-30 20:00:08,494 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:08,494 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=107.5 K, sizeToCheck=16.0 K 2023-05-30 20:00:08,494 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:08,494 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e because midkey is the same as first or last row 2023-05-30 20:00:08,497 DEBUG
[RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/effaf41edca5442284b8b2ec7cf0639f as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f 2023-05-30 20:00:08,502 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c4f1eed95fcff6228404d8e96f348e3c/info of c4f1eed95fcff6228404d8e96f348e3c into effaf41edca5442284b8b2ec7cf0639f(size=73.5 K), total size for store is 98.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-30 20:00:08,502 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:08,502 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c., storeName=c4f1eed95fcff6228404d8e96f348e3c/info, priority=13, startTime=1685476808465; duration=0sec 2023-05-30 20:00:08,502 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=98.3 K, sizeToCheck=16.0 K 2023-05-30 20:00:08,502 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-30 20:00:08,503 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:08,503 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:08,504 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41009] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,45089,1685476783273, parent={ENCODED => c4f1eed95fcff6228404d8e96f348e3c, NAME => 'TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-30 20:00:08,511 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41009] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:08,517 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41009] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c4f1eed95fcff6228404d8e96f348e3c, daughterA=d535d549ce6c92a858104678d8e7b2b3, daughterB=a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:08,518 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c4f1eed95fcff6228404d8e96f348e3c, daughterA=d535d549ce6c92a858104678d8e7b2b3, daughterB=a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:08,518 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c4f1eed95fcff6228404d8e96f348e3c, daughterA=d535d549ce6c92a858104678d8e7b2b3, daughterB=a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:08,518 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c4f1eed95fcff6228404d8e96f348e3c, daughterA=d535d549ce6c92a858104678d8e7b2b3, daughterB=a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:08,526 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, UNASSIGN}] 2023-05-30 20:00:08,527 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, UNASSIGN 2023-05-30 20:00:08,528 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c4f1eed95fcff6228404d8e96f348e3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:08,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476808527"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476808527"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476808527"}]},"ts":"1685476808527"} 2023-05-30 20:00:08,529 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 20:00:08,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c4f1eed95fcff6228404d8e96f348e3c, disabling compactions & flushes 2023-05-30 20:00:08,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 20:00:08,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 20:00:08,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. after waiting 0 ms 2023-05-30 20:00:08,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 
2023-05-30 20:00:08,687 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c4f1eed95fcff6228404d8e96f348e3c 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-30 20:00:08,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/1c727c7c222b4d3c8334fef29fe361b2 2023-05-30 20:00:08,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.tmp/info/1c727c7c222b4d3c8334fef29fe361b2 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/1c727c7c222b4d3c8334fef29fe361b2 2023-05-30 20:00:08,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/1c727c7c222b4d3c8334fef29fe361b2, entries=10, sequenceid=118, filesize=15.3 K 2023-05-30 20:00:08,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for c4f1eed95fcff6228404d8e96f348e3c in 21ms, sequenceid=118, compaction requested=true 2023-05-30 20:00:08,714 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956] to archive 2023-05-30 20:00:08,715 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-30 20:00:08,717 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/799634bc483d47788f7fbf22ebe44c37 2023-05-30 20:00:08,718 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/db664a67a8854dc883a5391877ad10a0 2023-05-30 20:00:08,719 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b22b6a0727674bc98b73dd86ae268c0e 2023-05-30 20:00:08,720 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/234b222e6ffe4ff88dcaf682b685d2cc 2023-05-30 20:00:08,721 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/67e557bad71f4e56b05a4e50b92b7aa0 2023-05-30 20:00:08,722 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/b1fe7d9c1b814aaf98faf5b164ac6956 2023-05-30 
20:00:08,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=1 2023-05-30 20:00:08,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. 2023-05-30 20:00:08,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c4f1eed95fcff6228404d8e96f348e3c: 2023-05-30 20:00:08,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,731 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c4f1eed95fcff6228404d8e96f348e3c, regionState=CLOSED 2023-05-30 20:00:08,731 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476808731"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476808731"}]},"ts":"1685476808731"} 2023-05-30 20:00:08,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-30 20:00:08,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure c4f1eed95fcff6228404d8e96f348e3c, server=jenkins-hbase4.apache.org,45089,1685476783273 in 204 msec 2023-05-30 20:00:08,736 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-30 20:00:08,736 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c4f1eed95fcff6228404d8e96f348e3c, UNASSIGN in 210 msec 2023-05-30 20:00:08,751 INFO [PEWorker-5] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=c4f1eed95fcff6228404d8e96f348e3c, threads=3 2023-05-30 20:00:08,752 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/1c727c7c222b4d3c8334fef29fe361b2 for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,752 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/6c4c66d37d464581b774a8c54d7a2feb for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,752 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,761 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/1c727c7c222b4d3c8334fef29fe361b2, top=true 2023-05-30 20:00:08,761 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/6c4c66d37d464581b774a8c54d7a2feb, top=true 2023-05-30 20:00:08,766 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.splits/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2 for child: a9d413521ebbabcab40632f4b2d413b1, parent: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,766 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/1c727c7c222b4d3c8334fef29fe361b2 for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,779 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/.splits/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb for child: a9d413521ebbabcab40632f4b2d413b1, parent: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,779 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/6c4c66d37d464581b774a8c54d7a2feb for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,791 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f for region: c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:00:08,791 DEBUG [PEWorker-5] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region c4f1eed95fcff6228404d8e96f348e3c Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
2023-05-30 20:00:08,815 DEBUG [PEWorker-5] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-30 20:00:08,817 DEBUG [PEWorker-5] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/recovered.edits/121.seqid, newMaxSeqId=121, maxSeqId=-1 2023-05-30 20:00:08,819 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685476808818"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685476808818"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685476808818"}]},"ts":"1685476808818"} 2023-05-30 20:00:08,819 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476808818"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476808818"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476808818"}]},"ts":"1685476808818"} 2023-05-30 20:00:08,819 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476808818"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476808818"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476808818"}]},"ts":"1685476808818"} 2023-05-30 20:00:08,858 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45089] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-30 20:00:08,859 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-30 20:00:08,859 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-30 20:00:08,867 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d535d549ce6c92a858104678d8e7b2b3, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a9d413521ebbabcab40632f4b2d413b1, ASSIGN}] 2023-05-30 20:00:08,868 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d535d549ce6c92a858104678d8e7b2b3, ASSIGN 2023-05-30 20:00:08,868 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a9d413521ebbabcab40632f4b2d413b1, ASSIGN 2023-05-30 20:00:08,869 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d535d549ce6c92a858104678d8e7b2b3, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,45089,1685476783273; forceNewPlan=false, retain=false 2023-05-30 20:00:08,869 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a9d413521ebbabcab40632f4b2d413b1, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,45089,1685476783273; forceNewPlan=false, retain=false 2023-05-30 20:00:08,870 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/info/2f41887a9afe4dc18e2d8f94f6f61dcb 2023-05-30 20:00:08,882 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/table/1ae8bb59aa754896b5919b4b45592c06 2023-05-30 20:00:08,887 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/info/2f41887a9afe4dc18e2d8f94f6f61dcb as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info/2f41887a9afe4dc18e2d8f94f6f61dcb 2023-05-30 20:00:08,892 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info/2f41887a9afe4dc18e2d8f94f6f61dcb, entries=29, sequenceid=17, filesize=8.6 K 2023-05-30 20:00:08,893 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/table/1ae8bb59aa754896b5919b4b45592c06 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/table/1ae8bb59aa754896b5919b4b45592c06 2023-05-30 20:00:08,897 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/table/1ae8bb59aa754896b5919b4b45592c06, entries=4, sequenceid=17, filesize=4.8 K 2023-05-30 20:00:08,898 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 38ms, sequenceid=17, compaction requested=false 2023-05-30 20:00:08,898 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-30 20:00:09,021 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=d535d549ce6c92a858104678d8e7b2b3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:09,021 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a9d413521ebbabcab40632f4b2d413b1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:09,021 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476809020"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476809020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476809020"}]},"ts":"1685476809020"} 2023-05-30 20:00:09,021 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476809020"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476809020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476809020"}]},"ts":"1685476809020"} 2023-05-30 20:00:09,023 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure d535d549ce6c92a858104678d8e7b2b3, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 20:00:09,023 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273}] 2023-05-30 20:00:09,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 
2023-05-30 20:00:09,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9d413521ebbabcab40632f4b2d413b1, NAME => 'TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-30 20:00:09,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:00:09,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,179 INFO [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,180 DEBUG [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info 2023-05-30 20:00:09,180 DEBUG [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info 2023-05-30 20:00:09,180 INFO [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9d413521ebbabcab40632f4b2d413b1 columnFamilyName info 2023-05-30 20:00:09,189 DEBUG [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] regionserver.HStore(539): loaded hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2 2023-05-30 20:00:09,193 DEBUG [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] regionserver.HStore(539): loaded 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb 2023-05-30 20:00:09,200 DEBUG [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] regionserver.HStore(539): loaded hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-top 2023-05-30 20:00:09,200 INFO [StoreOpener-a9d413521ebbabcab40632f4b2d413b1-1] regionserver.HStore(310): Store=a9d413521ebbabcab40632f4b2d413b1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:00:09,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:09,206 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9d413521ebbabcab40632f4b2d413b1; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=823545, jitterRate=0.04719217121601105}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 20:00:09,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:09,207 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., pid=18, masterSystemTime=1685476809174 2023-05-30 20:00:09,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:09,208 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:09,209 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 
2023-05-30 20:00:09,209 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:00:09,209 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:00:09,209 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-top, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=113.5 K 2023-05-30 20:00:09,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:00:09,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:00:09,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 
2023-05-30 20:00:09,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d535d549ce6c92a858104678d8e7b2b3, NAME => 'TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-30 20:00:09,210 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=83, earliestPutTs=1685476794327 2023-05-30 20:00:09,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:00:09,210 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=a9d413521ebbabcab40632f4b2d413b1, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:09,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,210 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=104, earliestPutTs=1685476808442 2023-05-30 20:00:09,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,210 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476809210"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476809210"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476809210"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476809210"}]},"ts":"1685476809210"} 2023-05-30 20:00:09,211 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685476808466 2023-05-30 20:00:09,212 INFO [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,212 DEBUG [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info 2023-05-30 20:00:09,213 DEBUG [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info 2023-05-30 20:00:09,213 INFO [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d535d549ce6c92a858104678d8e7b2b3 columnFamilyName info 2023-05-30 20:00:09,214 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-30 20:00:09,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 in 189 msec 2023-05-30 20:00:09,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=a9d413521ebbabcab40632f4b2d413b1, ASSIGN in 348 msec 2023-05-30 20:00:09,222 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#37 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:09,222 DEBUG [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] regionserver.HStore(539): loaded hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-bottom 2023-05-30 20:00:09,223 INFO [StoreOpener-d535d549ce6c92a858104678d8e7b2b3-1] regionserver.HStore(310): Store=d535d549ce6c92a858104678d8e7b2b3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:00:09,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:00:09,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d535d549ce6c92a858104678d8e7b2b3; next sequenceid=122; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=748968, jitterRate=-0.04763899743556976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 20:00:09,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d535d549ce6c92a858104678d8e7b2b3: 2023-05-30 20:00:09,229 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3., pid=17, masterSystemTime=1685476809174 2023-05-30 20:00:09,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-30 20:00:09,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:00:09,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 
2023-05-30 20:00:09,232 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=d535d549ce6c92a858104678d8e7b2b3, regionState=OPEN, openSeqNum=122, regionLocation=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:09,232 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685476809232"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476809232"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476809232"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476809232"}]},"ts":"1685476809232"} 2023-05-30 20:00:09,235 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-30 20:00:09,236 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:00:09,236 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HStore(1912): d535d549ce6c92a858104678d8e7b2b3/info is initiating minor compaction (all files) 2023-05-30 20:00:09,236 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HRegion(2259): Starting compaction of d535d549ce6c92a858104678d8e7b2b3/info in TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:00:09,236 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-bottom] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/.tmp, totalSize=73.5 K 2023-05-30 20:00:09,237 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] compactions.Compactor(207): Compacting effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c, keycount=32, bloomtype=ROW, size=73.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1685476794327 2023-05-30 20:00:09,238 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-30 20:00:09,239 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure d535d549ce6c92a858104678d8e7b2b3, server=jenkins-hbase4.apache.org,45089,1685476783273 in 213 msec 2023-05-30 20:00:09,241 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-05-30 20:00:09,241 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=d535d549ce6c92a858104678d8e7b2b3, ASSIGN in 372 msec 2023-05-30 20:00:09,243 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c4f1eed95fcff6228404d8e96f348e3c, daughterA=d535d549ce6c92a858104678d8e7b2b3, daughterB=a9d413521ebbabcab40632f4b2d413b1 in 730 msec 2023-05-30 20:00:09,246 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/872be97e098f4443bb5e2d043f7209ce as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/872be97e098f4443bb5e2d043f7209ce 2023-05-30 20:00:09,246 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] throttle.PressureAwareThroughputController(145): d535d549ce6c92a858104678d8e7b2b3#info#compaction#38 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:09,258 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into 872be97e098f4443bb5e2d043f7209ce(size=39.8 K), total size for store is 39.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-30 20:00:09,258 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:09,258 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476809207; duration=0sec 2023-05-30 20:00:09,258 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:09,262 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/.tmp/info/03af2c3945f5473f9293dfeceee38674 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/03af2c3945f5473f9293dfeceee38674 2023-05-30 20:00:09,268 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in d535d549ce6c92a858104678d8e7b2b3/info of d535d549ce6c92a858104678d8e7b2b3 into 03af2c3945f5473f9293dfeceee38674(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:00:09,268 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for d535d549ce6c92a858104678d8e7b2b3: 2023-05-30 20:00:09,268 INFO [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3., storeName=d535d549ce6c92a858104678d8e7b2b3/info, priority=15, startTime=1685476809230; duration=0sec 2023-05-30 20:00:09,268 DEBUG [RS:0;jenkins-hbase4:45089-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:14,367 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 20:00:18,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] ipc.CallRunner(144): callId: 107 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476828510, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685476784310.c4f1eed95fcff6228404d8e96f348e3c. is not online on jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:29,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=3, created chunk count=13, reused chunk count=30, reuseRatio=69.77% 2023-05-30 20:00:29,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-30 20:00:36,790 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-30 20:00:40,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:40,542 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 20:00:40,563 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=132 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/64adffbda8a74c0189b6c932c25b2fd7 2023-05-30 20:00:40,571 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/64adffbda8a74c0189b6c932c25b2fd7 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7 2023-05-30 20:00:40,576 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7, entries=7, 
sequenceid=132, filesize=12.1 K 2023-05-30 20:00:40,576 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-30 20:00:40,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] ipc.CallRunner(144): callId: 140 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476850576, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:40,577 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for a9d413521ebbabcab40632f4b2d413b1 in 35ms, sequenceid=132, compaction requested=false 2023-05-30 20:00:40,577 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:50,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:50,619 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-30 20:00:50,629 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=158 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/7e8bea5d3c804d1b8118be6e8da1a721 2023-05-30 20:00:50,635 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/7e8bea5d3c804d1b8118be6e8da1a721 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721 2023-05-30 20:00:50,641 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721, entries=23, sequenceid=158, filesize=29.0 K 2023-05-30 20:00:50,642 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush 
of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=4.20 KB/4304 for a9d413521ebbabcab40632f4b2d413b1 in 23ms, sequenceid=158, compaction requested=true 2023-05-30 20:00:50,642 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:50,642 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:50,642 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:50,643 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82797 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:00:50,643 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:00:50,643 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:00:50,643 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/872be97e098f4443bb5e2d043f7209ce, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=80.9 K 2023-05-30 20:00:50,644 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 872be97e098f4443bb5e2d043f7209ce, keycount=33, bloomtype=ROW, size=39.8 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1685476806428 2023-05-30 20:00:50,644 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 64adffbda8a74c0189b6c932c25b2fd7, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=132, earliestPutTs=1685476838533 2023-05-30 20:00:50,644 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 7e8bea5d3c804d1b8118be6e8da1a721, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=158, earliestPutTs=1685476840543 2023-05-30 20:00:50,654 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#41 average throughput is 64.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:50,668 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/893719836d0648b29dc52447986b3514 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/893719836d0648b29dc52447986b3514 2023-05-30 20:00:50,673 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into 893719836d0648b29dc52447986b3514(size=71.6 K), total size for store is 71.6 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-30 20:00:50,673 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:50,673 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476850642; duration=0sec 2023-05-30 20:00:50,674 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:52,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:52,628 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 20:00:52,638 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=169 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/abeac2ab238d403e8725a30fbd1e7f50 2023-05-30 20:00:52,645 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/abeac2ab238d403e8725a30fbd1e7f50 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50 2023-05-30 20:00:52,652 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50, entries=7, sequenceid=169, filesize=12.1 K 2023-05-30 20:00:52,653 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=21.02 KB/21520 for a9d413521ebbabcab40632f4b2d413b1 in 25ms, sequenceid=169, compaction requested=false 2023-05-30 20:00:52,653 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status 
journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:52,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:52,653 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=22.07 KB heapSize=23.88 KB 2023-05-30 20:00:52,670 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=193 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/60058b983b084ea4a2b0df78b22da11b 2023-05-30 20:00:52,676 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/60058b983b084ea4a2b0df78b22da11b as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b 2023-05-30 20:00:52,682 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b, entries=21, sequenceid=193, filesize=26.9 K 2023-05-30 20:00:52,683 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22596, heapSize ~23.86 KB/24432, currentSize=8.41 KB/8608 for a9d413521ebbabcab40632f4b2d413b1 in 29ms, sequenceid=193, compaction requested=true 2023-05-30 20:00:52,683 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:52,683 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:52,683 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:52,684 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 113224 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:00:52,684 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:00:52,684 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 
2023-05-30 20:00:52,684 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/893719836d0648b29dc52447986b3514, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=110.6 K 2023-05-30 20:00:52,685 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 893719836d0648b29dc52447986b3514, keycount=63, bloomtype=ROW, size=71.6 K, encoding=NONE, compression=NONE, seqNum=158, earliestPutTs=1685476806428 2023-05-30 20:00:52,685 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting abeac2ab238d403e8725a30fbd1e7f50, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=169, earliestPutTs=1685476850620 2023-05-30 20:00:52,686 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 60058b983b084ea4a2b0df78b22da11b, keycount=21, bloomtype=ROW, size=26.9 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1685476852629 2023-05-30 20:00:52,698 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#44 average throughput is 46.69 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:52,718 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/6d488634f17d4d41b2b7c49e61e6b737 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/6d488634f17d4d41b2b7c49e61e6b737 2023-05-30 20:00:52,723 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into 6d488634f17d4d41b2b7c49e61e6b737(size=101.2 K), total size for store is 101.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:00:52,724 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:52,724 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476852683; duration=0sec 2023-05-30 20:00:52,724 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:54,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:54,665 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-30 20:00:54,677 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=206 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/25f6b90fe4df42f1908affb082233d1e 2023-05-30 20:00:54,683 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/25f6b90fe4df42f1908affb082233d1e as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e 2023-05-30 20:00:54,688 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e, entries=9, sequenceid=206, filesize=14.2 K 2023-05-30 20:00:54,689 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=19.96 KB/20444 for a9d413521ebbabcab40632f4b2d413b1 in 25ms, sequenceid=206, compaction requested=false 2023-05-30 20:00:54,689 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:54,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:00:54,690 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-30 20:00:54,699 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=229 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/49a87ed2a1524adaaed3c2754f4cb1a4 2023-05-30 20:00:54,701 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-30 20:00:54,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] ipc.CallRunner(144): callId: 209 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476864701, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:00:54,705 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/49a87ed2a1524adaaed3c2754f4cb1a4 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4 2023-05-30 20:00:54,709 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4, entries=20, sequenceid=229, filesize=25.8 K 2023-05-30 20:00:54,710 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for a9d413521ebbabcab40632f4b2d413b1 in 20ms, sequenceid=229, compaction requested=true 2023-05-30 20:00:54,710 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:54,710 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:00:54,710 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:00:54,711 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 144592 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:00:54,711 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:00:54,712 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:00:54,712 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/6d488634f17d4d41b2b7c49e61e6b737, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=141.2 K 2023-05-30 20:00:54,712 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 6d488634f17d4d41b2b7c49e61e6b737, keycount=91, bloomtype=ROW, size=101.2 K, encoding=NONE, compression=NONE, seqNum=193, earliestPutTs=1685476806428 2023-05-30 20:00:54,712 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 25f6b90fe4df42f1908affb082233d1e, keycount=9, bloomtype=ROW, size=14.2 K, encoding=NONE, compression=NONE, seqNum=206, earliestPutTs=1685476852654 2023-05-30 20:00:54,713 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 49a87ed2a1524adaaed3c2754f4cb1a4, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1685476854665 2023-05-30 20:00:54,722 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#47 average throughput is 123.14 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:00:54,733 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/f4f8434cf51a4626a04ece4f68faf1bf as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/f4f8434cf51a4626a04ece4f68faf1bf 2023-05-30 20:00:54,738 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into f4f8434cf51a4626a04ece4f68faf1bf(size=131.9 K), total size for store is 131.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:00:54,738 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:00:54,738 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476854710; duration=0sec 2023-05-30 20:00:54,738 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:04,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:04,784 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-30 20:01:04,796 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=243 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/832c4a71fd9b4f588888d5a501a9be57 2023-05-30 20:01:04,802 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/832c4a71fd9b4f588888d5a501a9be57 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57 2023-05-30 20:01:04,807 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57, entries=10, sequenceid=243, filesize=15.3 K 2023-05-30 20:01:04,808 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=1.05 KB/1076 for a9d413521ebbabcab40632f4b2d413b1 in 24ms, sequenceid=243, compaction requested=false 2023-05-30 20:01:04,808 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:06,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:06,792 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 20:01:06,803 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=253 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/3326a724275b4645b3eeaaa3947c7f03 2023-05-30 20:01:06,808 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/3326a724275b4645b3eeaaa3947c7f03 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03 2023-05-30 20:01:06,814 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03, entries=7, sequenceid=253, filesize=12.1 K 2023-05-30 20:01:06,814 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for a9d413521ebbabcab40632f4b2d413b1 in 22ms, sequenceid=253, compaction requested=true 2023-05-30 20:01:06,815 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:06,815 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:06,815 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:01:06,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:06,816 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-30 20:01:06,816 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 163136 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:01:06,816 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:01:06,816 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 
2023-05-30 20:01:06,816 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/f4f8434cf51a4626a04ece4f68faf1bf, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=159.3 K 2023-05-30 20:01:06,817 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting f4f8434cf51a4626a04ece4f68faf1bf, keycount=120, bloomtype=ROW, size=131.9 K, encoding=NONE, compression=NONE, seqNum=229, earliestPutTs=1685476806428 2023-05-30 20:01:06,817 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 832c4a71fd9b4f588888d5a501a9be57, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=243, earliestPutTs=1685476854690 2023-05-30 20:01:06,818 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 3326a724275b4645b3eeaaa3947c7f03, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1685476864785 2023-05-30 20:01:06,833 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=276 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/e30c8269cd314479ad3c0cec5ec0f773 2023-05-30 20:01:06,834 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#51 average throughput is 140.58 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:01:06,839 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/e30c8269cd314479ad3c0cec5ec0f773 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773 2023-05-30 20:01:06,846 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773, entries=20, sequenceid=276, filesize=25.8 K 2023-05-30 20:01:06,847 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for a9d413521ebbabcab40632f4b2d413b1 in 31ms, sequenceid=276, compaction requested=false 2023-05-30 20:01:06,848 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:07,250 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/46c50bcb240c4862affe92c520da3e78 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/46c50bcb240c4862affe92c520da3e78 2023-05-30 20:01:07,256 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into 46c50bcb240c4862affe92c520da3e78(size=150.0 K), total size for store is 175.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:01:07,256 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:07,256 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476866815; duration=0sec 2023-05-30 20:01:07,256 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:08,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:08,824 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-30 20:01:08,841 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=287 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/90709275fa84449ca375c3205fc32215 2023-05-30 20:01:08,848 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/90709275fa84449ca375c3205fc32215 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215 2023-05-30 20:01:08,853 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215, entries=7, sequenceid=287, filesize=12.1 K 2023-05-30 20:01:08,854 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for a9d413521ebbabcab40632f4b2d413b1 in 30ms, sequenceid=287, compaction requested=true 2023-05-30 20:01:08,854 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:08,854 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:08,854 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:01:08,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:08,855 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-30 20:01:08,856 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has 
selected 3 files of size 192463 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:01:08,856 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:01:08,856 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:08,856 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/46c50bcb240c4862affe92c520da3e78, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=188.0 K 2023-05-30 20:01:08,859 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 46c50bcb240c4862affe92c520da3e78, keycount=137, bloomtype=ROW, size=150.0 K, encoding=NONE, compression=NONE, seqNum=253, earliestPutTs=1685476806428 2023-05-30 20:01:08,859 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting e30c8269cd314479ad3c0cec5ec0f773, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=276, earliestPutTs=1685476866793 2023-05-30 20:01:08,860 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 90709275fa84449ca375c3205fc32215, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=287, earliestPutTs=1685476866816 2023-05-30 20:01:08,865 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45089] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-30 20:01:08,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45089] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:60136 deadline: 1685476878865, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=a9d413521ebbabcab40632f4b2d413b1, server=jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:01:08,882 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=312 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/e2f946032f0144fabc2dc37990ae21e1 2023-05-30 20:01:08,889 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#54 average throughput is 84.14 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:01:08,891 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/e2f946032f0144fabc2dc37990ae21e1 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1 2023-05-30 20:01:08,896 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1, entries=22, sequenceid=312, filesize=27.9 K 2023-05-30 20:01:08,897 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=7.36 KB/7532 for a9d413521ebbabcab40632f4b2d413b1 in 42ms, sequenceid=312, compaction requested=false 2023-05-30 20:01:08,897 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:08,912 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/e34e43b011de4c7c867571ed81c4c28e as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e34e43b011de4c7c867571ed81c4c28e 2023-05-30 20:01:08,917 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into e34e43b011de4c7c867571ed81c4c28e(size=178.5 K), total size for store is 206.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:01:08,917 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:08,917 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476868854; duration=0sec 2023-05-30 20:01:08,917 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:18,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45089] regionserver.HRegion(9158): Flush requested on a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:18,890 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-30 20:01:18,899 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=324 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/9edf2c6334b74f9eb420e5eb0b22db7b 2023-05-30 20:01:18,905 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/9edf2c6334b74f9eb420e5eb0b22db7b as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b 2023-05-30 20:01:18,911 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b, entries=8, sequenceid=324, filesize=13.2 K 2023-05-30 20:01:18,911 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=1.05 KB/1076 for a9d413521ebbabcab40632f4b2d413b1 in 21ms, sequenceid=324, compaction requested=true 2023-05-30 20:01:18,912 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:18,912 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:18,912 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-30 20:01:18,913 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 224943 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-30 20:01:18,913 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1912): a9d413521ebbabcab40632f4b2d413b1/info is initiating minor compaction (all files) 2023-05-30 20:01:18,913 INFO 
[RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of a9d413521ebbabcab40632f4b2d413b1/info in TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:18,913 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e34e43b011de4c7c867571ed81c4c28e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b] into tmpdir=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp, totalSize=219.7 K 2023-05-30 20:01:18,913 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting e34e43b011de4c7c867571ed81c4c28e, keycount=164, bloomtype=ROW, size=178.5 K, encoding=NONE, compression=NONE, seqNum=287, earliestPutTs=1685476806428 2023-05-30 20:01:18,914 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting e2f946032f0144fabc2dc37990ae21e1, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=312, earliestPutTs=1685476868825 2023-05-30 20:01:18,914 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] compactions.Compactor(207): Compacting 9edf2c6334b74f9eb420e5eb0b22db7b, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=324, earliestPutTs=1685476868855 2023-05-30 20:01:18,923 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] throttle.PressureAwareThroughputController(145): a9d413521ebbabcab40632f4b2d413b1#info#compaction#56 average throughput is 99.54 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-30 20:01:18,933 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/1658c1c0e38340d9b652df2412f621cb as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/1658c1c0e38340d9b652df2412f621cb 2023-05-30 20:01:18,939 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in a9d413521ebbabcab40632f4b2d413b1/info of a9d413521ebbabcab40632f4b2d413b1 into 1658c1c0e38340d9b652df2412f621cb(size=210.3 K), total size for store is 210.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-30 20:01:18,939 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:18,939 INFO [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1., storeName=a9d413521ebbabcab40632f4b2d413b1/info, priority=13, startTime=1685476878912; duration=0sec 2023-05-30 20:01:18,939 DEBUG [RS:0;jenkins-hbase4:45089-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-30 20:01:20,892 INFO [Listener at localhost/40695] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-30 20:01:20,906 INFO [Listener at localhost/40695] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476783648 with entries=311, filesize=307.65 KB; new WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880892 2023-05-30 20:01:20,907 DEBUG [Listener at localhost/40695] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33913,DS-bcad548f-636f-46dc-97bf-f8b4d6111790,DISK], DatanodeInfoWithStorage[127.0.0.1:39865,DS-7e8f37fb-4d42-412f-9653-a65b64169a89,DISK]] 2023-05-30 20:01:20,907 DEBUG [Listener at localhost/40695] wal.AbstractFSWAL(716): hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476783648 is not closed yet, will try archiving it next time 2023-05-30 20:01:20,912 DEBUG [Listener at localhost/40695] regionserver.HRegion(2446): Flush status journal for d535d549ce6c92a858104678d8e7b2b3: 2023-05-30 20:01:20,912 INFO [Listener at localhost/40695] regionserver.HRegion(2745): Flushing 1e5f39411632247dfb17e864603d997c 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-30 20:01:20,924 INFO [Listener at localhost/40695] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/.tmp/info/ab99ffb9044f49fba204272a199a6151 2023-05-30 20:01:20,930 DEBUG [Listener at localhost/40695] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/.tmp/info/ab99ffb9044f49fba204272a199a6151 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/info/ab99ffb9044f49fba204272a199a6151 2023-05-30 20:01:20,934 INFO [Listener at localhost/40695] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/info/ab99ffb9044f49fba204272a199a6151, entries=2, sequenceid=6, filesize=4.8 K 2023-05-30 20:01:20,936 INFO [Listener at localhost/40695] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 
B/472, currentSize=0 B/0 for 1e5f39411632247dfb17e864603d997c in 24ms, sequenceid=6, compaction requested=false 2023-05-30 20:01:20,936 DEBUG [Listener at localhost/40695] regionserver.HRegion(2446): Flush status journal for 1e5f39411632247dfb17e864603d997c: 2023-05-30 20:01:20,936 INFO [Listener at localhost/40695] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-30 20:01:20,947 INFO [Listener at localhost/40695] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/info/439bce8f1c994fe0acab888f6570d882 2023-05-30 20:01:20,951 DEBUG [Listener at localhost/40695] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/.tmp/info/439bce8f1c994fe0acab888f6570d882 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info/439bce8f1c994fe0acab888f6570d882 2023-05-30 20:01:20,956 INFO [Listener at localhost/40695] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/info/439bce8f1c994fe0acab888f6570d882, entries=16, sequenceid=24, filesize=7.0 K 2023-05-30 20:01:20,956 INFO [Listener at localhost/40695] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 20ms, sequenceid=24, compaction requested=false 2023-05-30 20:01:20,956 DEBUG [Listener at localhost/40695] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-30 20:01:20,957 INFO [Listener at localhost/40695] regionserver.HRegion(2745): Flushing a9d413521ebbabcab40632f4b2d413b1 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-30 20:01:20,964 INFO [Listener at localhost/40695] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=329 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/56aec088a9bb461b910ef2c5ced01b61 2023-05-30 20:01:20,968 DEBUG [Listener at localhost/40695] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/.tmp/info/56aec088a9bb461b910ef2c5ced01b61 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/56aec088a9bb461b910ef2c5ced01b61 2023-05-30 20:01:20,973 INFO [Listener at localhost/40695] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/56aec088a9bb461b910ef2c5ced01b61, entries=1, sequenceid=329, filesize=5.8 K 2023-05-30 20:01:20,974 INFO [Listener at localhost/40695] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for a9d413521ebbabcab40632f4b2d413b1 in 17ms, sequenceid=329, compaction requested=false 2023-05-30 20:01:20,974 DEBUG [Listener at localhost/40695] regionserver.HRegion(2446): 
Flush status journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:20,981 INFO [Listener at localhost/40695] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880892 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880974 2023-05-30 20:01:20,981 DEBUG [Listener at localhost/40695] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33913,DS-bcad548f-636f-46dc-97bf-f8b4d6111790,DISK], DatanodeInfoWithStorage[127.0.0.1:39865,DS-7e8f37fb-4d42-412f-9653-a65b64169a89,DISK]] 2023-05-30 20:01:20,982 DEBUG [Listener at localhost/40695] wal.AbstractFSWAL(716): hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880892 is not closed yet, will try archiving it next time 2023-05-30 20:01:20,982 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476783648 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476783648 2023-05-30 20:01:20,984 INFO [Listener at localhost/40695] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-30 20:01:20,987 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880892 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs/jenkins-hbase4.apache.org%2C45089%2C1685476783273.1685476880892 2023-05-30 20:01:21,084 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 20:01:21,084 INFO [Listener at localhost/40695] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-30 20:01:21,084 DEBUG [Listener at localhost/40695] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a82f05b to 127.0.0.1:49181 2023-05-30 20:01:21,084 DEBUG [Listener at localhost/40695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:21,084 DEBUG [Listener at localhost/40695] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 20:01:21,084 DEBUG [Listener at localhost/40695] util.JVMClusterUtil(257): Found active master hash=908349761, stopped=false 2023-05-30 20:01:21,085 INFO [Listener at localhost/40695] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 20:01:21,086 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:21,086 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:21,087 INFO [Listener at localhost/40695] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 20:01:21,087 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:21,087 DEBUG [Listener at localhost/40695] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16d79fc6 to 127.0.0.1:49181 2023-05-30 20:01:21,087 DEBUG [Listener at localhost/40695] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:21,087 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:21,088 INFO [Listener at localhost/40695] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,45089,1685476783273' ***** 2023-05-30 20:01:21,087 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:21,088 INFO [Listener at localhost/40695] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 20:01:21,088 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(3303): Received CLOSE for d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(3303): Received CLOSE for 1e5f39411632247dfb17e864603d997c 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(3303): Received CLOSE for a9d413521ebbabcab40632f4b2d413b1 2023-05-30 20:01:21,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d535d549ce6c92a858104678d8e7b2b3, disabling compactions & flushes 2023-05-30 20:01:21,088 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:01:21,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:01:21,089 DEBUG [RS:0;jenkins-hbase4:45089] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4064231c to 127.0.0.1:49181 2023-05-30 20:01:21,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 
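The "Rolled WAL ... with entries=4, filesize=1.22 KB" record at the top of this excerpt is the region server rotating its write-ahead log; the new FSHLog writer gets a fresh datanode pipeline ("Create new FSHLog writer with pipeline ...") and the WAL-Archive thread then moves the replaced files into oldWALs. A roll like this can also be requested from a client through the Admin API. A minimal sketch, not the test's own code, assuming a reachable cluster and reusing the server name printed in these records:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Server name format is host,port,startcode -- copied from the log above.
            ServerName server =
                ServerName.valueOf("jenkins-hbase4.apache.org,45089,1685476783273");
            // Ask the region server to roll its WAL; the replaced file is later
            // archived to oldWALs, as the WAL-Archive-0 records show.
            admin.rollWALWriter(server);
        }
    }
}
```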
2023-05-30 20:01:21,089 DEBUG [RS:0;jenkins-hbase4:45089] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:21,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. after waiting 0 ms 2023-05-30 20:01:21,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:01:21,089 INFO [RS:0;jenkins-hbase4:45089] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 20:01:21,089 INFO [RS:0;jenkins-hbase4:45089] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 20:01:21,089 INFO [RS:0;jenkins-hbase4:45089] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 20:01:21,089 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 20:01:21,089 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-30 20:01:21,090 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1478): Online Regions={d535d549ce6c92a858104678d8e7b2b3=TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3., 1e5f39411632247dfb17e864603d997c=hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c., 1588230740=hbase:meta,,1.1588230740, a9d413521ebbabcab40632f4b2d413b1=TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.} 2023-05-30 20:01:21,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 20:01:21,090 DEBUG [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1504): Waiting on 1588230740, 1e5f39411632247dfb17e864603d997c, a9d413521ebbabcab40632f4b2d413b1, d535d549ce6c92a858104678d8e7b2b3 2023-05-30 20:01:21,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 20:01:21,090 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-bottom] to archive 2023-05-30 20:01:21,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 20:01:21,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 20:01:21,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 20:01:21,093 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-30 20:01:21,096 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:01:21,100 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-30 20:01:21,100 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-30 20:01:21,101 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 20:01:21,101 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 20:01:21,101 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-30 20:01:21,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/d535d549ce6c92a858104678d8e7b2b3/recovered.edits/126.seqid, newMaxSeqId=126, maxSeqId=121 2023-05-30 20:01:21,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d535d549ce6c92a858104678d8e7b2b3: 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685476808511.d535d549ce6c92a858104678d8e7b2b3. 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1e5f39411632247dfb17e864603d997c, disabling compactions & flushes 2023-05-30 20:01:21,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. after waiting 0 ms 2023-05-30 20:01:21,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 
2023-05-30 20:01:21,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/hbase/namespace/1e5f39411632247dfb17e864603d997c/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-30 20:01:21,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1e5f39411632247dfb17e864603d997c: 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685476783801.1e5f39411632247dfb17e864603d997c. 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9d413521ebbabcab40632f4b2d413b1, disabling compactions & flushes 2023-05-30 20:01:21,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. after waiting 0 ms 2023-05-30 20:01:21,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 
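The StoreCloser records that follow walk the store files of a9d413521ebbabcab40632f4b2d413b1 that compaction has made redundant and move each one from the data tree into the parallel archive tree (<rootdir>/archive/data/<namespace>/<table>/<region>/<family>/) instead of deleting it outright. A small, purely illustrative sketch of listing such an archive directory with the Hadoop FileSystem API, using the root directory and NameNode port from this particular run (both are ephemeral to the test):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedStoreFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:40151"), conf);
        // Mirrors the layout in the log:
        // <rootdir>/archive/data/<namespace>/<table>/<region>/<family>
        Path archivedFamily = new Path(
            "/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616"
            + "/archive/data/default/TestLogRolling-testLogRolling"
            + "/a9d413521ebbabcab40632f4b2d413b1/info");
        for (FileStatus status : fs.listStatus(archivedFamily)) {
            System.out.println(status.getPath().getName() + " " + status.getLen());
        }
    }
}
```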
2023-05-30 20:01:21,116 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c->hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/c4f1eed95fcff6228404d8e96f348e3c/info/effaf41edca5442284b8b2ec7cf0639f-top, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/872be97e098f4443bb5e2d043f7209ce, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/893719836d0648b29dc52447986b3514, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/6d488634f17d4d41b2b7c49e61e6b737, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/f4f8434cf51a4626a04ece4f68faf1bf, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/46c50bcb240c4862affe92c520da3e78, 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e34e43b011de4c7c867571ed81c4c28e, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b] to archive 2023-05-30 20:01:21,116 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-30 20:01:21,118 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/effaf41edca5442284b8b2ec7cf0639f.c4f1eed95fcff6228404d8e96f348e3c 2023-05-30 20:01:21,119 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-6c4c66d37d464581b774a8c54d7a2feb 2023-05-30 20:01:21,120 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/872be97e098f4443bb5e2d043f7209ce to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/872be97e098f4443bb5e2d043f7209ce 2023-05-30 20:01:21,121 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from 
FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/TestLogRolling-testLogRolling=c4f1eed95fcff6228404d8e96f348e3c-1c727c7c222b4d3c8334fef29fe361b2 2023-05-30 20:01:21,123 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/64adffbda8a74c0189b6c932c25b2fd7 2023-05-30 20:01:21,124 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/893719836d0648b29dc52447986b3514 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/893719836d0648b29dc52447986b3514 2023-05-30 20:01:21,125 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/7e8bea5d3c804d1b8118be6e8da1a721 2023-05-30 20:01:21,126 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/abeac2ab238d403e8725a30fbd1e7f50 2023-05-30 20:01:21,127 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/6d488634f17d4d41b2b7c49e61e6b737 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/6d488634f17d4d41b2b7c49e61e6b737 2023-05-30 
20:01:21,128 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/60058b983b084ea4a2b0df78b22da11b 2023-05-30 20:01:21,129 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/25f6b90fe4df42f1908affb082233d1e 2023-05-30 20:01:21,130 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/f4f8434cf51a4626a04ece4f68faf1bf to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/f4f8434cf51a4626a04ece4f68faf1bf 2023-05-30 20:01:21,131 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/49a87ed2a1524adaaed3c2754f4cb1a4 2023-05-30 20:01:21,132 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/832c4a71fd9b4f588888d5a501a9be57 2023-05-30 20:01:21,133 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/46c50bcb240c4862affe92c520da3e78 to 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/46c50bcb240c4862affe92c520da3e78 2023-05-30 20:01:21,134 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/3326a724275b4645b3eeaaa3947c7f03 2023-05-30 20:01:21,135 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e30c8269cd314479ad3c0cec5ec0f773 2023-05-30 20:01:21,136 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e34e43b011de4c7c867571ed81c4c28e to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e34e43b011de4c7c867571ed81c4c28e 2023-05-30 20:01:21,137 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/90709275fa84449ca375c3205fc32215 2023-05-30 20:01:21,138 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1 to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/e2f946032f0144fabc2dc37990ae21e1 2023-05-30 20:01:21,139 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b to hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/archive/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/info/9edf2c6334b74f9eb420e5eb0b22db7b 2023-05-30 20:01:21,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/data/default/TestLogRolling-testLogRolling/a9d413521ebbabcab40632f4b2d413b1/recovered.edits/332.seqid, newMaxSeqId=332, maxSeqId=121 2023-05-30 20:01:21,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:21,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9d413521ebbabcab40632f4b2d413b1: 2023-05-30 20:01:21,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685476808511.a9d413521ebbabcab40632f4b2d413b1. 2023-05-30 20:01:21,290 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45089,1685476783273; all regions closed. 2023-05-30 20:01:21,291 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:01:21,296 DEBUG [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs 2023-05-30 20:01:21,296 INFO [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C45089%2C1685476783273.meta:.meta(num 1685476783756) 2023-05-30 20:01:21,296 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/WALs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:01:21,301 DEBUG [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/oldWALs 2023-05-30 20:01:21,301 INFO [RS:0;jenkins-hbase4:45089] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C45089%2C1685476783273:(num 1685476880974) 2023-05-30 20:01:21,301 DEBUG [RS:0;jenkins-hbase4:45089] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:21,301 INFO [RS:0;jenkins-hbase4:45089] regionserver.LeaseManager(133): Closed leases 2023-05-30 20:01:21,301 INFO [RS:0;jenkins-hbase4:45089] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-30 20:01:21,301 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-30 20:01:21,302 INFO [RS:0;jenkins-hbase4:45089] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45089 2023-05-30 20:01:21,305 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45089,1685476783273 2023-05-30 20:01:21,305 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 20:01:21,305 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 20:01:21,306 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45089,1685476783273] 2023-05-30 20:01:21,306 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45089,1685476783273; numProcessing=1 2023-05-30 20:01:21,308 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45089,1685476783273 already deleted, retry=false 2023-05-30 20:01:21,308 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45089,1685476783273 expired; onlineServers=0 2023-05-30 20:01:21,308 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,41009,1685476783234' ***** 2023-05-30 20:01:21,308 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-30 20:01:21,309 DEBUG [M:0;jenkins-hbase4:41009] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@321bd35d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 20:01:21,309 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 20:01:21,309 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41009,1685476783234; all regions closed. 2023-05-30 20:01:21,309 DEBUG [M:0;jenkins-hbase4:41009] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:21,309 DEBUG [M:0;jenkins-hbase4:41009] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-30 20:01:21,309 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
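The NodeDeleted/NodeChildrenChanged events on /hbase/rs above are how the master learns that the region server is stopping: each live region server keeps an ephemeral znode under /hbase/rs, and its disappearance is what RegionServerTracker processes as an expiration. A hedged sketch, using the plain ZooKeeper client against the quorum address printed in these records, of listing those ephemeral nodes:

```java
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListRegionServerZNodes {
    public static void main(String[] args) throws Exception {
        // Quorum address and base znode (/hbase) are taken from the log records above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:49181", 30000, event -> { });
        try {
            // One ephemeral child per live region server; the child vanishes when the
            // server shuts down, which triggers the NodeChildrenChanged event seen here.
            List<String> servers = zk.getChildren("/hbase/rs", false);
            servers.forEach(System.out::println);
        } finally {
            zk.close();
        }
    }
}
```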
2023-05-30 20:01:21,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476783403] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476783403,5,FailOnTimeoutGroup] 2023-05-30 20:01:21,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476783403] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476783403,5,FailOnTimeoutGroup] 2023-05-30 20:01:21,309 DEBUG [M:0;jenkins-hbase4:41009] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-30 20:01:21,310 INFO [M:0;jenkins-hbase4:41009] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-30 20:01:21,310 INFO [M:0;jenkins-hbase4:41009] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-30 20:01:21,310 INFO [M:0;jenkins-hbase4:41009] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-30 20:01:21,310 DEBUG [M:0;jenkins-hbase4:41009] master.HMaster(1512): Stopping service threads 2023-05-30 20:01:21,310 INFO [M:0;jenkins-hbase4:41009] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-30 20:01:21,311 ERROR [M:0;jenkins-hbase4:41009] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-30 20:01:21,311 INFO [M:0;jenkins-hbase4:41009] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-30 20:01:21,311 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-30 20:01:21,311 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-30 20:01:21,311 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:21,311 DEBUG [M:0;jenkins-hbase4:41009] zookeeper.ZKUtil(398): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-30 20:01:21,311 WARN [M:0;jenkins-hbase4:41009] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-30 20:01:21,311 INFO [M:0;jenkins-hbase4:41009] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-30 20:01:21,311 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 20:01:21,312 INFO [M:0;jenkins-hbase4:41009] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-30 20:01:21,312 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 20:01:21,312 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:21,312 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:21,312 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 20:01:21,312 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-30 20:01:21,312 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.71 KB heapSize=78.42 KB 2023-05-30 20:01:21,321 INFO [M:0;jenkins-hbase4:41009] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.71 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd6ac0ba30b848a6981236ef7739cd50 2023-05-30 20:01:21,326 INFO [M:0;jenkins-hbase4:41009] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd6ac0ba30b848a6981236ef7739cd50 2023-05-30 20:01:21,327 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd6ac0ba30b848a6981236ef7739cd50 as hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd6ac0ba30b848a6981236ef7739cd50 2023-05-30 20:01:21,331 INFO [M:0;jenkins-hbase4:41009] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd6ac0ba30b848a6981236ef7739cd50 2023-05-30 20:01:21,331 INFO [M:0;jenkins-hbase4:41009] regionserver.HStore(1080): Added hdfs://localhost:40151/user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd6ac0ba30b848a6981236ef7739cd50, entries=18, sequenceid=160, filesize=6.9 K 2023-05-30 20:01:21,332 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegion(2948): Finished flush of dataSize ~64.71 KB/66268, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=160, compaction requested=false 2023-05-30 20:01:21,333 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:21,333 DEBUG [M:0;jenkins-hbase4:41009] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 20:01:21,333 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/55711a4b-9391-8d28-8cf5-3c169e1b1616/MasterData/WALs/jenkins-hbase4.apache.org,41009,1685476783234 2023-05-30 20:01:21,337 INFO [M:0;jenkins-hbase4:41009] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-30 20:01:21,337 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-30 20:01:21,337 INFO [M:0;jenkins-hbase4:41009] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41009 2023-05-30 20:01:21,341 DEBUG [M:0;jenkins-hbase4:41009] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41009,1685476783234 already deleted, retry=false 2023-05-30 20:01:21,406 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 20:01:21,406 INFO [RS:0;jenkins-hbase4:45089] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45089,1685476783273; zookeeper connection closed. 
2023-05-30 20:01:21,406 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): regionserver:45089-0x1007dad37730001, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 20:01:21,407 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@30513685] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@30513685 2023-05-30 20:01:21,407 INFO [Listener at localhost/40695] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-30 20:01:21,506 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 20:01:21,506 INFO [M:0;jenkins-hbase4:41009] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41009,1685476783234; zookeeper connection closed. 2023-05-30 20:01:21,506 DEBUG [Listener at localhost/40695-EventThread] zookeeper.ZKWatcher(600): master:41009-0x1007dad37730000, quorum=127.0.0.1:49181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-30 20:01:21,508 WARN [Listener at localhost/40695] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 20:01:21,512 INFO [Listener at localhost/40695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 20:01:21,527 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-30 20:01:21,616 WARN [BP-2037815513-172.31.14.131-1685476782699 heartbeating to localhost/127.0.0.1:40151] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 20:01:21,616 WARN [BP-2037815513-172.31.14.131-1685476782699 heartbeating to localhost/127.0.0.1:40151] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2037815513-172.31.14.131-1685476782699 (Datanode Uuid 5a352b9d-0079-4b86-b44a-0a59573525b9) service to localhost/127.0.0.1:40151 2023-05-30 20:01:21,617 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/dfs/data/data3/current/BP-2037815513-172.31.14.131-1685476782699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 20:01:21,618 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/dfs/data/data4/current/BP-2037815513-172.31.14.131-1685476782699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 20:01:21,619 WARN [Listener at localhost/40695] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-30 20:01:21,623 INFO [Listener at localhost/40695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 20:01:21,727 WARN [BP-2037815513-172.31.14.131-1685476782699 heartbeating to localhost/127.0.0.1:40151] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-30 20:01:21,727 WARN [BP-2037815513-172.31.14.131-1685476782699 heartbeating to 
localhost/127.0.0.1:40151] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2037815513-172.31.14.131-1685476782699 (Datanode Uuid 89e79c3a-f42a-4064-8b85-2a202d55775f) service to localhost/127.0.0.1:40151 2023-05-30 20:01:21,728 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/dfs/data/data1/current/BP-2037815513-172.31.14.131-1685476782699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 20:01:21,728 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/cluster_fb01a824-2968-0912-f168-2161a270a9e8/dfs/data/data2/current/BP-2037815513-172.31.14.131-1685476782699] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-30 20:01:21,741 INFO [Listener at localhost/40695] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-30 20:01:21,857 INFO [Listener at localhost/40695] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-30 20:01:21,888 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-30 20:01:21,898 INFO [Listener at localhost/40695] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 93) - Thread LEAK? -, OpenFileDescriptor=540 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=34 (was 28) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 171), AvailableMemoryMB=2461 (was 2707) 2023-05-30 20:01:21,906 INFO [Listener at localhost/40695] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=540, MaxFileDescriptor=60000, SystemLoadAverage=34, ProcessCount=170, AvailableMemoryMB=2461 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/hadoop.log.dir so I do NOT create it in target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/732e287d-ee9d-f2a3-3166-967f7247f0ca/hadoop.tmp.dir so I do NOT create it in target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e, deleteOnExit=true 2023-05-30 20:01:21,907 INFO [Listener at 
localhost/40695] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/test.cache.data in system properties and HBase conf 2023-05-30 20:01:21,907 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/hadoop.tmp.dir in system properties and HBase conf 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/hadoop.log.dir in system properties and HBase conf 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-30 20:01:21,908 DEBUG [Listener at localhost/40695] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-30 20:01:21,908 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/nfs.dump.dir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/java.io.tmpdir in system properties and HBase conf 2023-05-30 20:01:21,909 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-30 20:01:21,910 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-30 20:01:21,910 INFO [Listener at localhost/40695] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-30 20:01:21,911 WARN [Listener at localhost/40695] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-30 20:01:21,914 WARN [Listener at localhost/40695] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 20:01:21,914 WARN [Listener at localhost/40695] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 20:01:21,952 WARN [Listener at localhost/40695] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 20:01:21,953 INFO [Listener at localhost/40695] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 20:01:21,958 INFO [Listener at localhost/40695] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/java.io.tmpdir/Jetty_localhost_41787_hdfs____tlp8xv/webapp 2023-05-30 20:01:22,048 INFO [Listener at localhost/40695] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41787 2023-05-30 20:01:22,049 WARN [Listener at localhost/40695] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
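"Minicluster is down" a few records back ends the previous test case; the records around this point are HBaseTestingUtility standing up a fresh DFS, ZooKeeper and HBase minicluster for testLogRollOnNothingWritten with the StartMiniClusterOption values echoed earlier (1 master, 1 region server, 2 data nodes, 1 ZK server). A minimal sketch of the usual JUnit lifecycle around that utility; the option-builder method names are assumed from those values, not quoted from the test source:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.After;
import org.junit.Before;

public class MiniClusterLifecycleSketch {
    private final HBaseTestingUtility testUtil = new HBaseTestingUtility();

    @Before
    public void setUp() throws Exception {
        // Same shape as the option echoed in the log:
        // 1 master, 1 region server, 2 data nodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        testUtil.startMiniCluster(option);
    }

    @After
    public void tearDown() throws Exception {
        // Produces the "Shutting down minicluster" / "Minicluster is down" records above.
        testUtil.shutdownMiniCluster();
    }
}
```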
2023-05-30 20:01:22,052 WARN [Listener at localhost/40695] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-30 20:01:22,052 WARN [Listener at localhost/40695] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-30 20:01:22,088 WARN [Listener at localhost/37877] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 20:01:22,103 WARN [Listener at localhost/37877] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 20:01:22,105 WARN [Listener at localhost/37877] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 20:01:22,106 INFO [Listener at localhost/37877] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 20:01:22,110 INFO [Listener at localhost/37877] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/java.io.tmpdir/Jetty_localhost_33437_datanode____63vehr/webapp 2023-05-30 20:01:22,199 INFO [Listener at localhost/37877] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33437 2023-05-30 20:01:22,204 WARN [Listener at localhost/38461] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 20:01:22,217 WARN [Listener at localhost/38461] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-30 20:01:22,218 WARN [Listener at localhost/38461] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-30 20:01:22,219 INFO [Listener at localhost/38461] log.Slf4jLog(67): jetty-6.1.26 2023-05-30 20:01:22,222 INFO [Listener at localhost/38461] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/java.io.tmpdir/Jetty_localhost_37663_datanode____giwet2/webapp 2023-05-30 20:01:22,308 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x75ea5300986bf26b: Processing first storage report for DS-844db66f-03b6-474d-8763-caec97ded0a6 from datanode 4cdc0625-ac6d-4dfb-a1cb-c681f20aaec9 2023-05-30 20:01:22,308 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x75ea5300986bf26b: from storage DS-844db66f-03b6-474d-8763-caec97ded0a6 node DatanodeRegistration(127.0.0.1:46215, datanodeUuid=4cdc0625-ac6d-4dfb-a1cb-c681f20aaec9, infoPort=34207, infoSecurePort=0, ipcPort=38461, storageInfo=lv=-57;cid=testClusterID;nsid=1398906638;c=1685476881917), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-30 20:01:22,308 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x75ea5300986bf26b: Processing first storage report for DS-9d30f834-1b69-4604-9afd-007632d48345 from datanode 4cdc0625-ac6d-4dfb-a1cb-c681f20aaec9 2023-05-30 20:01:22,309 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x75ea5300986bf26b: from storage DS-9d30f834-1b69-4604-9afd-007632d48345 node DatanodeRegistration(127.0.0.1:46215, datanodeUuid=4cdc0625-ac6d-4dfb-a1cb-c681f20aaec9, infoPort=34207, infoSecurePort=0, ipcPort=38461, storageInfo=lv=-57;cid=testClusterID;nsid=1398906638;c=1685476881917), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 20:01:22,317 INFO [Listener at localhost/38461] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37663 2023-05-30 20:01:22,322 WARN [Listener at localhost/45051] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-30 20:01:22,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20fb5f1321c3bedb: Processing first storage report for DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f from datanode 860d4138-19d4-4714-bc5f-edc50b68934e 2023-05-30 20:01:22,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20fb5f1321c3bedb: from storage DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f node DatanodeRegistration(127.0.0.1:41183, datanodeUuid=860d4138-19d4-4714-bc5f-edc50b68934e, infoPort=36443, infoSecurePort=0, ipcPort=45051, storageInfo=lv=-57;cid=testClusterID;nsid=1398906638;c=1685476881917), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 20:01:22,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x20fb5f1321c3bedb: Processing first storage report for DS-5b007941-f808-4796-b27b-b198843c7e73 from datanode 860d4138-19d4-4714-bc5f-edc50b68934e 2023-05-30 20:01:22,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x20fb5f1321c3bedb: from storage DS-5b007941-f808-4796-b27b-b198843c7e73 node DatanodeRegistration(127.0.0.1:41183, datanodeUuid=860d4138-19d4-4714-bc5f-edc50b68934e, infoPort=36443, infoSecurePort=0, ipcPort=45051, storageInfo=lv=-57;cid=testClusterID;nsid=1398906638;c=1685476881917), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-30 20:01:22,430 DEBUG [Listener at localhost/45051] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061 2023-05-30 20:01:22,432 INFO [Listener at localhost/45051] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/zookeeper_0, clientPort=56671, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-30 20:01:22,432 INFO [Listener at localhost/45051] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56671 2023-05-30 20:01:22,433 INFO [Listener at localhost/45051] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,433 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,447 INFO [Listener at localhost/45051] util.FSUtils(471): Created version file at hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567 with version=8 2023-05-30 20:01:22,447 INFO [Listener at localhost/45051] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43381/user/jenkins/test-data/4779f818-f3a5-d4ff-99df-244e7a4f258f/hbase-staging 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 20:01:22,449 INFO [Listener at localhost/45051] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 20:01:22,450 INFO [Listener at localhost/45051] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36693 2023-05-30 20:01:22,451 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,452 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,452 INFO [Listener at localhost/45051] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36693 connecting to ZooKeeper ensemble=127.0.0.1:56671 2023-05-30 20:01:22,460 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:366930x0, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 20:01:22,461 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36693-0x1007daebb020000 connected 2023-05-30 20:01:22,477 DEBUG [Listener at localhost/45051] 
zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 20:01:22,478 DEBUG [Listener at localhost/45051] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:22,478 DEBUG [Listener at localhost/45051] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 20:01:22,478 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36693 2023-05-30 20:01:22,479 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36693 2023-05-30 20:01:22,479 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36693 2023-05-30 20:01:22,482 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36693 2023-05-30 20:01:22,482 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36693 2023-05-30 20:01:22,482 INFO [Listener at localhost/45051] master.HMaster(444): hbase.rootdir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567, hbase.cluster.distributed=false 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-30 20:01:22,495 INFO [Listener at localhost/45051] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-30 20:01:22,497 INFO [Listener at localhost/45051] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33051 2023-05-30 20:01:22,497 INFO [Listener at localhost/45051] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-30 20:01:22,498 DEBUG [Listener at localhost/45051] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-30 
20:01:22,498 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,499 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,500 INFO [Listener at localhost/45051] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33051 connecting to ZooKeeper ensemble=127.0.0.1:56671 2023-05-30 20:01:22,505 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:330510x0, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-30 20:01:22,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33051-0x1007daebb020001 connected 2023-05-30 20:01:22,506 DEBUG [Listener at localhost/45051] zookeeper.ZKUtil(164): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-30 20:01:22,507 DEBUG [Listener at localhost/45051] zookeeper.ZKUtil(164): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:22,507 DEBUG [Listener at localhost/45051] zookeeper.ZKUtil(164): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-30 20:01:22,510 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33051 2023-05-30 20:01:22,510 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33051 2023-05-30 20:01:22,510 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33051 2023-05-30 20:01:22,512 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33051 2023-05-30 20:01:22,513 DEBUG [Listener at localhost/45051] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33051 2023-05-30 20:01:22,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,515 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 20:01:22,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,517 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 20:01:22,517 DEBUG 
[Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-30 20:01:22,517 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,518 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 20:01:22,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36693,1685476882448 from backup master directory 2023-05-30 20:01:22,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-30 20:01:22,520 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,520 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-30 20:01:22,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,520 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-30 20:01:22,532 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/hbase.id with ID: 60dccdec-f423-4795-a6a1-f1aae75d4006 2023-05-30 20:01:22,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:22,542 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,552 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x63c860bd to 127.0.0.1:56671 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 20:01:22,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@232b8e9d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 20:01:22,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region 
for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-30 20:01:22,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-30 20:01:22,556 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 20:01:22,557 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store-tmp 2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-30 20:01:22,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-30 20:01:22,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-30 20:01:22,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 20:01:22,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/WALs/jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,566 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36693%2C1685476882448, suffix=, logDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/WALs/jenkins-hbase4.apache.org,36693,1685476882448, archiveDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/oldWALs, maxLogs=10 2023-05-30 20:01:22,570 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/WALs/jenkins-hbase4.apache.org,36693,1685476882448/jenkins-hbase4.apache.org%2C36693%2C1685476882448.1685476882566 2023-05-30 20:01:22,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46215,DS-844db66f-03b6-474d-8763-caec97ded0a6,DISK], DatanodeInfoWithStorage[127.0.0.1:41183,DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f,DISK]] 2023-05-30 20:01:22,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-30 20:01:22,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:22,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,572 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,573 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-30 20:01:22,573 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-30 20:01:22,574 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-30 20:01:22,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 20:01:22,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855242, jitterRate=0.08749677240848541}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 20:01:22,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-30 20:01:22,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-30 20:01:22,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-30 20:01:22,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-30 20:01:22,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-30 20:01:22,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-30 20:01:22,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-30 20:01:22,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-30 20:01:22,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-30 20:01:22,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-30 20:01:22,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-30 20:01:22,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-30 20:01:22,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-30 20:01:22,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-30 20:01:22,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-30 20:01:22,595 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-30 20:01:22,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-30 20:01:22,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-30 20:01:22,599 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:22,599 DEBUG [Listener at localhost/45051-EventThread] 
zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:22,599 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36693,1685476882448, sessionid=0x1007daebb020000, setting cluster-up flag (Was=false) 2023-05-30 20:01:22,603 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-30 20:01:22,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,611 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-30 20:01:22,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:22,616 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.hbase-snapshot/.tmp 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, 
maxPoolSize=10 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 20:01:22,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685476912625 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-30 20:01:22,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-30 20:01:22,627 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 20:01:22,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-30 20:01:22,627 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-30 20:01:22,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-30 20:01:22,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-30 20:01:22,628 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 20:01:22,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-30 20:01:22,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-30 20:01:22,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476882633,5,FailOnTimeoutGroup] 2023-05-30 20:01:22,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476882634,5,FailOnTimeoutGroup] 2023-05-30 20:01:22,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-30 20:01:22,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-30 20:01:22,640 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 20:01:22,640 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-30 20:01:22,640 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567 2023-05-30 20:01:22,647 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:22,648 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 20:01:22,649 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/info 2023-05-30 20:01:22,649 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 20:01:22,650 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,650 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 20:01:22,651 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/rep_barrier 2023-05-30 20:01:22,651 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 20:01:22,652 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,652 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 20:01:22,652 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/table 2023-05-30 20:01:22,653 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 20:01:22,653 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,654 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740 2023-05-30 20:01:22,654 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740 2023-05-30 20:01:22,656 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 20:01:22,657 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 20:01:22,658 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 20:01:22,659 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=753723, jitterRate=-0.0415920615196228}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-30 20:01:22,659 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-30 20:01:22,659 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-30 20:01:22,659 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-30 20:01:22,660 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-30 20:01:22,660 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-30 20:01:22,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-30 20:01:22,662 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-30 20:01:22,663 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-30 20:01:22,715 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(951): ClusterId : 60dccdec-f423-4795-a6a1-f1aae75d4006 2023-05-30 20:01:22,715 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-30 20:01:22,718 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-30 20:01:22,718 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-30 20:01:22,721 DEBUG 
[RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-30 20:01:22,722 DEBUG [RS:0;jenkins-hbase4:33051] zookeeper.ReadOnlyZKClient(139): Connect 0x443b8d00 to 127.0.0.1:56671 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 20:01:22,725 DEBUG [RS:0;jenkins-hbase4:33051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4afc6bf7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 20:01:22,725 DEBUG [RS:0;jenkins-hbase4:33051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ec3e670, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-30 20:01:22,733 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33051 2023-05-30 20:01:22,734 INFO [RS:0;jenkins-hbase4:33051] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-30 20:01:22,734 INFO [RS:0;jenkins-hbase4:33051] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-30 20:01:22,734 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1022): About to register with Master. 2023-05-30 20:01:22,734 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36693,1685476882448 with isa=jenkins-hbase4.apache.org/172.31.14.131:33051, startcode=1685476882495 2023-05-30 20:01:22,734 DEBUG [RS:0;jenkins-hbase4:33051] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-30 20:01:22,737 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51999, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-30 20:01:22,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36693] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,738 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567 2023-05-30 20:01:22,738 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37877 2023-05-30 20:01:22,738 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-30 20:01:22,740 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-30 20:01:22,740 DEBUG [RS:0;jenkins-hbase4:33051] zookeeper.ZKUtil(162): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,740 WARN [RS:0;jenkins-hbase4:33051] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared 
on crash by start scripts (Longer MTTR!) 2023-05-30 20:01:22,740 INFO [RS:0;jenkins-hbase4:33051] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 20:01:22,741 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,741 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33051,1685476882495] 2023-05-30 20:01:22,744 DEBUG [RS:0;jenkins-hbase4:33051] zookeeper.ZKUtil(162): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,745 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-30 20:01:22,745 INFO [RS:0;jenkins-hbase4:33051] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-30 20:01:22,746 INFO [RS:0;jenkins-hbase4:33051] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-30 20:01:22,746 INFO [RS:0;jenkins-hbase4:33051] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-30 20:01:22,747 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,747 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-30 20:01:22,748 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,748 DEBUG [RS:0;jenkins-hbase4:33051] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-30 20:01:22,750 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,750 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,750 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:22,761 INFO [RS:0;jenkins-hbase4:33051] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-30 20:01:22,761 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33051,1685476882495-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-30 20:01:22,771 INFO [RS:0;jenkins-hbase4:33051] regionserver.Replication(203): jenkins-hbase4.apache.org,33051,1685476882495 started 2023-05-30 20:01:22,771 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33051,1685476882495, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33051, sessionid=0x1007daebb020001 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33051,1685476882495' 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33051,1685476882495' 2023-05-30 20:01:22,771 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-30 20:01:22,772 DEBUG [RS:0;jenkins-hbase4:33051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-30 20:01:22,772 DEBUG [RS:0;jenkins-hbase4:33051] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-30 20:01:22,772 INFO [RS:0;jenkins-hbase4:33051] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-30 20:01:22,772 INFO [RS:0;jenkins-hbase4:33051] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-30 20:01:22,813 DEBUG [jenkins-hbase4:36693] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-30 20:01:22,814 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33051,1685476882495, state=OPENING 2023-05-30 20:01:22,815 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-30 20:01:22,818 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:22,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33051,1685476882495}] 2023-05-30 20:01:22,818 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 20:01:22,873 INFO [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33051%2C1685476882495, suffix=, logDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495, archiveDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs, maxLogs=32 2023-05-30 20:01:22,883 INFO [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495/jenkins-hbase4.apache.org%2C33051%2C1685476882495.1685476882874 2023-05-30 20:01:22,884 DEBUG [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41183,DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f,DISK], DatanodeInfoWithStorage[127.0.0.1:46215,DS-844db66f-03b6-474d-8763-caec97ded0a6,DISK]] 2023-05-30 20:01:22,972 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:22,972 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-30 20:01:22,974 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-30 20:01:22,977 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-30 20:01:22,978 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 20:01:22,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33051%2C1685476882495.meta, suffix=.meta, logDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495, archiveDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs, maxLogs=32 2023-05-30 20:01:22,986 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495/jenkins-hbase4.apache.org%2C33051%2C1685476882495.meta.1685476882980.meta 2023-05-30 20:01:22,986 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41183,DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f,DISK], DatanodeInfoWithStorage[127.0.0.1:46215,DS-844db66f-03b6-474d-8763-caec97ded0a6,DISK]] 2023-05-30 20:01:22,986 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-30 20:01:22,986 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-30 20:01:22,986 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-30 20:01:22,987 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-30 20:01:22,987 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-30 20:01:22,987 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:22,987 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-30 20:01:22,987 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-30 20:01:22,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-30 20:01:22,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/info 2023-05-30 20:01:22,989 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/info 2023-05-30 20:01:22,989 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-30 20:01:22,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,990 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-30 20:01:22,990 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/rep_barrier 2023-05-30 20:01:22,991 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/rep_barrier 2023-05-30 20:01:22,991 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-30 20:01:22,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,991 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-30 20:01:22,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/table 2023-05-30 20:01:22,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/table 2023-05-30 20:01:22,992 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-30 20:01:22,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:22,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740 2023-05-30 20:01:22,994 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740 2023-05-30 20:01:22,996 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-30 20:01:22,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-30 20:01:22,998 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=705438, jitterRate=-0.10299013555049896}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-30 20:01:22,998 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-30 20:01:23,001 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685476882972 2023-05-30 20:01:23,004 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-30 20:01:23,004 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-30 20:01:23,005 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33051,1685476882495, state=OPEN 2023-05-30 20:01:23,006 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-30 20:01:23,006 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-30 20:01:23,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-30 20:01:23,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33051,1685476882495 in 188 msec 2023-05-30 20:01:23,010 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-30 20:01:23,010 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 348 msec 2023-05-30 20:01:23,011 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 393 msec 2023-05-30 20:01:23,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685476883011, completionTime=-1 2023-05-30 20:01:23,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-30 20:01:23,012 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-30 20:01:23,014 DEBUG [hconnection-0x6c9c4acd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 20:01:23,016 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37274, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 20:01:23,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-30 20:01:23,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685476943017 2023-05-30 20:01:23,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685477003017 2023-05-30 20:01:23,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36693,1685476882448-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36693,1685476882448-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36693,1685476882448-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36693, period=300000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-30 20:01:23,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-30 20:01:23,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-30 20:01:23,025 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-30 20:01:23,026 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-30 20:01:23,027 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-30 20:01:23,028 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.tmp/data/hbase/namespace/cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,029 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.tmp/data/hbase/namespace/cffeb10de674958ce47fde620b68f176 empty. 2023-05-30 20:01:23,029 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.tmp/data/hbase/namespace/cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,029 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-30 20:01:23,041 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-30 20:01:23,043 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cffeb10de674958ce47fde620b68f176, NAME => 'hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/.tmp 2023-05-30 20:01:23,052 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:23,052 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cffeb10de674958ce47fde620b68f176, disabling compactions & flushes 2023-05-30 20:01:23,053 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 
2023-05-30 20:01:23,053 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,053 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. after waiting 0 ms 2023-05-30 20:01:23,053 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,053 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,053 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cffeb10de674958ce47fde620b68f176: 2023-05-30 20:01:23,055 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-30 20:01:23,056 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476883055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685476883055"}]},"ts":"1685476883055"} 2023-05-30 20:01:23,058 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-30 20:01:23,058 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-30 20:01:23,059 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476883058"}]},"ts":"1685476883058"} 2023-05-30 20:01:23,059 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-30 20:01:23,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cffeb10de674958ce47fde620b68f176, ASSIGN}] 2023-05-30 20:01:23,071 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cffeb10de674958ce47fde620b68f176, ASSIGN 2023-05-30 20:01:23,072 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cffeb10de674958ce47fde620b68f176, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33051,1685476882495; forceNewPlan=false, retain=false 2023-05-30 20:01:23,223 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cffeb10de674958ce47fde620b68f176, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:23,223 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476883223"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685476883223"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685476883223"}]},"ts":"1685476883223"} 2023-05-30 20:01:23,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure cffeb10de674958ce47fde620b68f176, server=jenkins-hbase4.apache.org,33051,1685476882495}] 2023-05-30 20:01:23,380 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,380 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cffeb10de674958ce47fde620b68f176, NAME => 'hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.', STARTKEY => '', ENDKEY => ''} 2023-05-30 20:01:23,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-30 20:01:23,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,382 INFO [StoreOpener-cffeb10de674958ce47fde620b68f176-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,383 DEBUG [StoreOpener-cffeb10de674958ce47fde620b68f176-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/info 2023-05-30 20:01:23,384 DEBUG [StoreOpener-cffeb10de674958ce47fde620b68f176-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/info 2023-05-30 20:01:23,384 INFO [StoreOpener-cffeb10de674958ce47fde620b68f176-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cffeb10de674958ce47fde620b68f176 columnFamilyName info 2023-05-30 20:01:23,385 INFO [StoreOpener-cffeb10de674958ce47fde620b68f176-1] regionserver.HStore(310): Store=cffeb10de674958ce47fde620b68f176/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-30 20:01:23,385 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,386 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,389 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,391 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-30 20:01:23,391 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cffeb10de674958ce47fde620b68f176; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=850750, jitterRate=0.08178551495075226}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-30 20:01:23,391 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cffeb10de674958ce47fde620b68f176: 2023-05-30 20:01:23,393 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176., pid=6, masterSystemTime=1685476883377 2023-05-30 20:01:23,395 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,438 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 
2023-05-30 20:01:23,447 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cffeb10de674958ce47fde620b68f176, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:23,449 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685476883444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685476883444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685476883444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685476883444"}]},"ts":"1685476883444"} 2023-05-30 20:01:23,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-30 20:01:23,453 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure cffeb10de674958ce47fde620b68f176, server=jenkins-hbase4.apache.org,33051,1685476882495 in 226 msec 2023-05-30 20:01:23,455 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-30 20:01:23,455 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cffeb10de674958ce47fde620b68f176, ASSIGN in 386 msec 2023-05-30 20:01:23,456 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-30 20:01:23,456 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685476883456"}]},"ts":"1685476883456"} 2023-05-30 20:01:23,457 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-30 20:01:23,460 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-30 20:01:23,461 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 436 msec 2023-05-30 20:01:23,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-30 20:01:23,541 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-30 20:01:23,541 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:23,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-30 20:01:23,553 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): 
master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 20:01:23,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-30 20:01:23,566 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-30 20:01:23,572 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-30 20:01:23,576 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-30 20:01:23,581 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-30 20:01:23,583 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.062sec 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36693,1685476882448-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-30 20:01:23,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36693,1685476882448-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-30 20:01:23,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-30 20:01:23,640 DEBUG [Listener at localhost/45051] zookeeper.ReadOnlyZKClient(139): Connect 0x25217372 to 127.0.0.1:56671 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-30 20:01:23,645 DEBUG [Listener at localhost/45051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c6ed1f7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-30 20:01:23,647 DEBUG [hconnection-0x7798c8c9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-30 20:01:23,648 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-30 20:01:23,650 INFO [Listener at localhost/45051] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:23,650 INFO [Listener at localhost/45051] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-30 20:01:23,653 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-30 20:01:23,653 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:23,654 INFO [Listener at localhost/45051] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-30 20:01:23,654 INFO [Listener at localhost/45051] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-30 20:01:23,656 INFO [Listener at localhost/45051] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1, archiveDir=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs, maxLogs=32 2023-05-30 20:01:23,662 INFO [Listener at localhost/45051] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1/test.com%2C8080%2C1.1685476883656 2023-05-30 20:01:23,662 DEBUG [Listener at localhost/45051] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46215,DS-844db66f-03b6-474d-8763-caec97ded0a6,DISK], DatanodeInfoWithStorage[127.0.0.1:41183,DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f,DISK]] 2023-05-30 20:01:23,668 INFO [Listener at localhost/45051] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1/test.com%2C8080%2C1.1685476883656 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1/test.com%2C8080%2C1.1685476883662 
2023-05-30 20:01:23,668 DEBUG [Listener at localhost/45051] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41183,DS-4b5d4074-0628-4e7d-97ec-7e73bc3fd99f,DISK], DatanodeInfoWithStorage[127.0.0.1:46215,DS-844db66f-03b6-474d-8763-caec97ded0a6,DISK]] 2023-05-30 20:01:23,668 DEBUG [Listener at localhost/45051] wal.AbstractFSWAL(716): hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1/test.com%2C8080%2C1.1685476883656 is not closed yet, will try archiving it next time 2023-05-30 20:01:23,669 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1 2023-05-30 20:01:23,677 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/test.com,8080,1/test.com%2C8080%2C1.1685476883656 to hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs/test.com%2C8080%2C1.1685476883656 2023-05-30 20:01:23,679 DEBUG [Listener at localhost/45051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs 2023-05-30 20:01:23,679 INFO [Listener at localhost/45051] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685476883662) 2023-05-30 20:01:23,679 INFO [Listener at localhost/45051] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-30 20:01:23,679 DEBUG [Listener at localhost/45051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x25217372 to 127.0.0.1:56671 2023-05-30 20:01:23,679 DEBUG [Listener at localhost/45051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:23,680 DEBUG [Listener at localhost/45051] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-30 20:01:23,680 DEBUG [Listener at localhost/45051] util.JVMClusterUtil(257): Found active master hash=641766345, stopped=false 2023-05-30 20:01:23,680 INFO [Listener at localhost/45051] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36693,1685476882448 2023-05-30 20:01:23,685 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:23,685 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-30 20:01:23,685 INFO [Listener at localhost/45051] procedure2.ProcedureExecutor(629): Stopping 2023-05-30 20:01:23,685 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-30 20:01:23,686 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:23,686 DEBUG [Listener at localhost/45051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63c860bd to 127.0.0.1:56671 2023-05-30 20:01:23,686 DEBUG [Listener at localhost/45051] ipc.AbstractRpcClient(494): Stopping rpc 
client 2023-05-30 20:01:23,686 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-30 20:01:23,687 INFO [Listener at localhost/45051] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33051,1685476882495' ***** 2023-05-30 20:01:23,687 INFO [Listener at localhost/45051] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-30 20:01:23,687 INFO [RS:0;jenkins-hbase4:33051] regionserver.HeapMemoryManager(220): Stopping 2023-05-30 20:01:23,687 INFO [RS:0;jenkins-hbase4:33051] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-30 20:01:23,687 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-30 20:01:23,687 INFO [RS:0;jenkins-hbase4:33051] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-30 20:01:23,687 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(3303): Received CLOSE for cffeb10de674958ce47fde620b68f176 2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33051,1685476882495 2023-05-30 20:01:23,688 DEBUG [RS:0;jenkins-hbase4:33051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x443b8d00 to 127.0.0.1:56671 2023-05-30 20:01:23,688 DEBUG [RS:0;jenkins-hbase4:33051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-30 20:01:23,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cffeb10de674958ce47fde620b68f176, disabling compactions & flushes 2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-30 20:01:23,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-30 20:01:23,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 2023-05-30 20:01:23,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. after waiting 0 ms 2023-05-30 20:01:23,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176. 
2023-05-30 20:01:23,688 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1474): Waiting on 2 regions to close
2023-05-30 20:01:23,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cffeb10de674958ce47fde620b68f176 1/1 column families, dataSize=78 B heapSize=488 B
2023-05-30 20:01:23,688 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1478): Online Regions={cffeb10de674958ce47fde620b68f176=hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176., 1588230740=hbase:meta,,1.1588230740}
2023-05-30 20:01:23,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-30 20:01:23,689 DEBUG [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1504): Waiting on 1588230740, cffeb10de674958ce47fde620b68f176
2023-05-30 20:01:23,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-30 20:01:23,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-30 20:01:23,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-30 20:01:23,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-30 20:01:23,689 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB
2023-05-30 20:01:23,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/.tmp/info/da14679d78de4758ab7814c6f30fc949
2023-05-30 20:01:23,700 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/.tmp/info/20777f8c3d024892b7d35043337115bc
2023-05-30 20:01:23,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/.tmp/info/da14679d78de4758ab7814c6f30fc949 as hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/info/da14679d78de4758ab7814c6f30fc949
2023-05-30 20:01:23,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/info/da14679d78de4758ab7814c6f30fc949, entries=2, sequenceid=6, filesize=4.8 K
2023-05-30 20:01:23,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for cffeb10de674958ce47fde620b68f176 in 25ms, sequenceid=6, compaction requested=false
2023-05-30 20:01:23,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/.tmp/table/c9a590e086434012a72d7d450e5fb3fb
2023-05-30 20:01:23,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/namespace/cffeb10de674958ce47fde620b68f176/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-05-30 20:01:23,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.
2023-05-30 20:01:23,720 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cffeb10de674958ce47fde620b68f176:
2023-05-30 20:01:23,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685476883024.cffeb10de674958ce47fde620b68f176.
2023-05-30 20:01:23,723 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/.tmp/info/20777f8c3d024892b7d35043337115bc as hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/info/20777f8c3d024892b7d35043337115bc
2023-05-30 20:01:23,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/info/20777f8c3d024892b7d35043337115bc, entries=10, sequenceid=9, filesize=5.9 K
2023-05-30 20:01:23,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/.tmp/table/c9a590e086434012a72d7d450e5fb3fb as hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/table/c9a590e086434012a72d7d450e5fb3fb
2023-05-30 20:01:23,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/table/c9a590e086434012a72d7d450e5fb3fb, entries=2, sequenceid=9, filesize=4.7 K
2023-05-30 20:01:23,732 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 43ms, sequenceid=9, compaction requested=false
2023-05-30 20:01:23,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-05-30 20:01:23,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-30 20:01:23,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-30 20:01:23,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-30 20:01:23,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-30 20:01:23,769 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-05-30 20:01:23,769 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-05-30 20:01:23,889 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33051,1685476882495; all regions closed.
2023-05-30 20:01:23,889 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495
2023-05-30 20:01:23,894 DEBUG [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs
2023-05-30 20:01:23,894 INFO [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33051%2C1685476882495.meta:.meta(num 1685476882980)
2023-05-30 20:01:23,894 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/WALs/jenkins-hbase4.apache.org,33051,1685476882495
2023-05-30 20:01:23,898 DEBUG [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/oldWALs
2023-05-30 20:01:23,898 INFO [RS:0;jenkins-hbase4:33051] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33051%2C1685476882495:(num 1685476882874)
2023-05-30 20:01:23,898 DEBUG [RS:0;jenkins-hbase4:33051] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-30 20:01:23,898 INFO [RS:0;jenkins-hbase4:33051] regionserver.LeaseManager(133): Closed leases
2023-05-30 20:01:23,898 INFO [RS:0;jenkins-hbase4:33051] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-05-30 20:01:23,899 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-30 20:01:23,899 INFO [RS:0;jenkins-hbase4:33051] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33051
2023-05-30 20:01:23,903 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33051,1685476882495
2023-05-30 20:01:23,903 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-30 20:01:23,903 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-30 20:01:23,905 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33051,1685476882495]
2023-05-30 20:01:23,905 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33051,1685476882495; numProcessing=1
2023-05-30 20:01:23,906 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33051,1685476882495 already deleted, retry=false
2023-05-30 20:01:23,906 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33051,1685476882495 expired; onlineServers=0
2023-05-30 20:01:23,906 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36693,1685476882448' *****
2023-05-30 20:01:23,906 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-30 20:01:23,906 DEBUG [M:0;jenkins-hbase4:36693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@227f03d9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-05-30 20:01:23,906 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36693,1685476882448
2023-05-30 20:01:23,906 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36693,1685476882448; all regions closed.
2023-05-30 20:01:23,906 DEBUG [M:0;jenkins-hbase4:36693] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-30 20:01:23,906 DEBUG [M:0;jenkins-hbase4:36693] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-30 20:01:23,907 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-30 20:01:23,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476882634] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685476882634,5,FailOnTimeoutGroup]
2023-05-30 20:01:23,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476882633] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685476882633,5,FailOnTimeoutGroup]
2023-05-30 20:01:23,907 DEBUG [M:0;jenkins-hbase4:36693] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-30 20:01:23,908 INFO [M:0;jenkins-hbase4:36693] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-30 20:01:23,908 INFO [M:0;jenkins-hbase4:36693] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-30 20:01:23,908 INFO [M:0;jenkins-hbase4:36693] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-05-30 20:01:23,908 DEBUG [M:0;jenkins-hbase4:36693] master.HMaster(1512): Stopping service threads
2023-05-30 20:01:23,908 INFO [M:0;jenkins-hbase4:36693] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-30 20:01:23,908 ERROR [M:0;jenkins-hbase4:36693] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-30 20:01:23,908 INFO [M:0;jenkins-hbase4:36693] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-30 20:01:23,908 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-30 20:01:23,909 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-30 20:01:23,909 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-30 20:01:23,909 DEBUG [M:0;jenkins-hbase4:36693] zookeeper.ZKUtil(398): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-30 20:01:23,909 WARN [M:0;jenkins-hbase4:36693] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-30 20:01:23,909 INFO [M:0;jenkins-hbase4:36693] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-30 20:01:23,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-30 20:01:23,909 INFO [M:0;jenkins-hbase4:36693] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-30 20:01:23,910 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-30 20:01:23,910 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 20:01:23,910 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 20:01:23,910 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-30 20:01:23,910 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 20:01:23,910 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB
2023-05-30 20:01:23,919 INFO [M:0;jenkins-hbase4:36693] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9ec02e73a4d44aed85fdc9db169deee6
2023-05-30 20:01:23,924 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9ec02e73a4d44aed85fdc9db169deee6 as hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9ec02e73a4d44aed85fdc9db169deee6
2023-05-30 20:01:23,928 INFO [M:0;jenkins-hbase4:36693] regionserver.HStore(1080): Added hdfs://localhost:37877/user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9ec02e73a4d44aed85fdc9db169deee6, entries=8, sequenceid=66, filesize=6.3 K
2023-05-30 20:01:23,928 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 18ms, sequenceid=66, compaction requested=false
2023-05-30 20:01:23,929 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-30 20:01:23,930 DEBUG [M:0;jenkins-hbase4:36693] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-30 20:01:23,930 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/02e77fdd-6220-06b6-5231-7f7f824a8567/MasterData/WALs/jenkins-hbase4.apache.org,36693,1685476882448
2023-05-30 20:01:23,932 INFO [M:0;jenkins-hbase4:36693] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-30 20:01:23,932 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-30 20:01:23,933 INFO [M:0;jenkins-hbase4:36693] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36693
2023-05-30 20:01:23,936 DEBUG [M:0;jenkins-hbase4:36693] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36693,1685476882448 already deleted, retry=false
2023-05-30 20:01:24,081 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-30 20:01:24,081 INFO [M:0;jenkins-hbase4:36693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36693,1685476882448; zookeeper connection closed.
2023-05-30 20:01:24,082 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): master:36693-0x1007daebb020000, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-30 20:01:24,182 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-30 20:01:24,182 INFO [RS:0;jenkins-hbase4:33051] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33051,1685476882495; zookeeper connection closed.
2023-05-30 20:01:24,182 DEBUG [Listener at localhost/45051-EventThread] zookeeper.ZKWatcher(600): regionserver:33051-0x1007daebb020001, quorum=127.0.0.1:56671, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-30 20:01:24,182 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1b20234] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1b20234
2023-05-30 20:01:24,183 INFO [Listener at localhost/45051] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-30 20:01:24,183 WARN [Listener at localhost/45051] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-30 20:01:24,186 INFO [Listener at localhost/45051] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-30 20:01:24,291 WARN [BP-1718174366-172.31.14.131-1685476881917 heartbeating to localhost/127.0.0.1:37877] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-30 20:01:24,291 WARN [BP-1718174366-172.31.14.131-1685476881917 heartbeating to localhost/127.0.0.1:37877] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1718174366-172.31.14.131-1685476881917 (Datanode Uuid 860d4138-19d4-4714-bc5f-edc50b68934e) service to localhost/127.0.0.1:37877
2023-05-30 20:01:24,291 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/dfs/data/data3/current/BP-1718174366-172.31.14.131-1685476881917] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-30 20:01:24,292 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/dfs/data/data4/current/BP-1718174366-172.31.14.131-1685476881917] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-30 20:01:24,292 WARN [Listener at localhost/45051] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-30 20:01:24,295 INFO [Listener at localhost/45051] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-30 20:01:24,307 WARN [BP-1718174366-172.31.14.131-1685476881917 heartbeating to localhost/127.0.0.1:37877] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1718174366-172.31.14.131-1685476881917 (Datanode Uuid 4cdc0625-ac6d-4dfb-a1cb-c681f20aaec9) service to localhost/127.0.0.1:37877
2023-05-30 20:01:24,308 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/dfs/data/data1/current/BP-1718174366-172.31.14.131-1685476881917] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-30 20:01:24,308 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/929a6493-35bc-c4b2-e113-cfb5227a6061/cluster_482b22b9-d767-3297-f97b-b41211c1f35e/dfs/data/data2/current/BP-1718174366-172.31.14.131-1685476881917] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-30 20:01:24,407 INFO [Listener at localhost/45051] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-30 20:01:24,518 INFO [Listener at localhost/45051] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-30 20:01:24,528 INFO [Listener at localhost/45051] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-30 20:01:24,540 INFO [Listener at localhost/45051] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 105) - Thread LEAK? -, OpenFileDescriptor=564 (was 540) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=34 (was 34), ProcessCount=170 (was 170), AvailableMemoryMB=2422 (was 2461)