2023-05-31 07:57:29,369 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de
2023-05-31 07:57:29,381 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins
2023-05-31 07:57:29,407 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=71, ProcessCount=167, AvailableMemoryMB=8805
2023-05-31 07:57:29,414 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 07:57:29,414 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68, deleteOnExit=true
2023-05-31 07:57:29,414 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 07:57:29,415 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/test.cache.data in system properties and HBase conf
2023-05-31 07:57:29,415 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 07:57:29,416 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/hadoop.log.dir in system properties and HBase conf
2023-05-31 07:57:29,416 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 07:57:29,417 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 07:57:29,417 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 07:57:29,507 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-05-31 07:57:29,838 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 07:57:29,841 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 07:57:29,841 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 07:57:29,842 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 07:57:29,842 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 07:57:29,843 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 07:57:29,843 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 07:57:29,843 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 07:57:29,843 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 07:57:29,844 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 07:57:29,844 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/nfs.dump.dir in system properties and HBase conf
2023-05-31 07:57:29,844 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/java.io.tmpdir in system properties and HBase conf
2023-05-31 07:57:29,844 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 07:57:29,845 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 07:57:29,845 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 07:57:30,229 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 07:57:30,240 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 07:57:30,243 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 07:57:30,679 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-05-31 07:57:30,806 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-05-31 07:57:30,820 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:57:30,852 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:57:30,912 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/java.io.tmpdir/Jetty_localhost_localdomain_32985_hdfs____.whpd67/webapp
2023-05-31 07:57:31,014 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:32985
2023-05-31 07:57:31,020 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 07:57:31,022 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 07:57:31,022 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 07:57:31,547 WARN [Listener at localhost.localdomain/43311] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:57:31,602 WARN [Listener at localhost.localdomain/43311] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 07:57:31,617 WARN [Listener at localhost.localdomain/43311] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:57:31,622 INFO [Listener at localhost.localdomain/43311] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:57:31,627 INFO [Listener at localhost.localdomain/43311] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/java.io.tmpdir/Jetty_localhost_34401_datanode____.j63obp/webapp
2023-05-31 07:57:31,700 INFO [Listener at localhost.localdomain/43311] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34401
2023-05-31 07:57:31,963 WARN [Listener at localhost.localdomain/35609] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:57:31,972 WARN [Listener at localhost.localdomain/35609] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 07:57:31,975 WARN [Listener at localhost.localdomain/35609] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:57:31,977 INFO [Listener at localhost.localdomain/35609] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:57:31,982 INFO [Listener at localhost.localdomain/35609] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/java.io.tmpdir/Jetty_localhost_34053_datanode____.scgi4s/webapp
2023-05-31 07:57:32,061 INFO [Listener at localhost.localdomain/35609] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34053
2023-05-31 07:57:32,072 WARN [Listener at localhost.localdomain/36673] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:57:32,738 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe153e3979db23ea6: Processing first storage report for DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d from datanode 8f2f76b6-30b4-4d69-b823-07933661c07b
2023-05-31 07:57:32,739 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe153e3979db23ea6: from storage DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d node DatanodeRegistration(127.0.0.1:40851, datanodeUuid=8f2f76b6-30b4-4d69-b823-07933661c07b, infoPort=39649, infoSecurePort=0, ipcPort=36673, storageInfo=lv=-57;cid=testClusterID;nsid=1475805891;c=1685519850299), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc8bc7ec88d7a54a5: Processing first storage report for DS-f9e2b882-a85b-417f-ae3b-bc2982149160 from datanode 1b5abf8e-013e-40e3-a804-ad6d9d3d6cc0
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc8bc7ec88d7a54a5: from storage DS-f9e2b882-a85b-417f-ae3b-bc2982149160 node DatanodeRegistration(127.0.0.1:41287, datanodeUuid=1b5abf8e-013e-40e3-a804-ad6d9d3d6cc0, infoPort=38473, infoSecurePort=0, ipcPort=35609, storageInfo=lv=-57;cid=testClusterID;nsid=1475805891;c=1685519850299), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe153e3979db23ea6: Processing first storage report for DS-a4136b2d-8f9f-46be-bd21-c37c9ef062a7 from datanode 8f2f76b6-30b4-4d69-b823-07933661c07b
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe153e3979db23ea6: from storage DS-a4136b2d-8f9f-46be-bd21-c37c9ef062a7 node DatanodeRegistration(127.0.0.1:40851, datanodeUuid=8f2f76b6-30b4-4d69-b823-07933661c07b, infoPort=39649, infoSecurePort=0, ipcPort=36673, storageInfo=lv=-57;cid=testClusterID;nsid=1475805891;c=1685519850299), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc8bc7ec88d7a54a5: Processing first storage report for DS-35d70ed3-15b6-44b6-965c-6e7d98c97541 from datanode 1b5abf8e-013e-40e3-a804-ad6d9d3d6cc0
2023-05-31 07:57:32,740 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc8bc7ec88d7a54a5: from storage DS-35d70ed3-15b6-44b6-965c-6e7d98c97541 node DatanodeRegistration(127.0.0.1:41287, datanodeUuid=1b5abf8e-013e-40e3-a804-ad6d9d3d6cc0, infoPort=38473, infoSecurePort=0, ipcPort=35609, storageInfo=lv=-57;cid=testClusterID;nsid=1475805891;c=1685519850299), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:57:32,795 DEBUG [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de
2023-05-31 07:57:32,859 INFO [Listener at localhost.localdomain/36673] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/zookeeper_0, clientPort=49338, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 07:57:32,872 INFO [Listener at localhost.localdomain/36673] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49338
2023-05-31 07:57:32,879 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:32,881 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:33,519 INFO [Listener at localhost.localdomain/36673] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338 with version=8
2023-05-31 07:57:33,519 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging
2023-05-31 07:57:33,786 INFO [Listener at localhost.localdomain/36673] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-05-31 07:57:34,153 INFO [Listener at localhost.localdomain/36673] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45
2023-05-31 07:57:34,178 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,178 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,178 INFO [Listener at localhost.localdomain/36673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 07:57:34,178 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,178 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 07:57:34,303 INFO [Listener at localhost.localdomain/36673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 07:57:34,382 DEBUG [Listener at localhost.localdomain/36673] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-05-31 07:57:34,456 INFO [Listener at localhost.localdomain/36673] ipc.NettyRpcServer(120): Bind to /188.40.62.62:43657
2023-05-31 07:57:34,465 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:34,466 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:34,484 INFO [Listener at localhost.localdomain/36673] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43657 connecting to ZooKeeper ensemble=127.0.0.1:49338
2023-05-31 07:57:34,566 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:436570x0, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 07:57:34,569 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43657-0x100803e236d0000 connected
2023-05-31 07:57:34,669 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 07:57:34,671 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 07:57:34,675 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 07:57:34,683 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43657
2023-05-31 07:57:34,684 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43657
2023-05-31 07:57:34,684 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43657
2023-05-31 07:57:34,684 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43657
2023-05-31 07:57:34,684 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43657
2023-05-31 07:57:34,689 INFO [Listener at localhost.localdomain/36673] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338, hbase.cluster.distributed=false
2023-05-31 07:57:34,750 INFO [Listener at localhost.localdomain/36673] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45
2023-05-31 07:57:34,751 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,751 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,751 INFO [Listener at localhost.localdomain/36673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 07:57:34,751 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:57:34,751 INFO [Listener at localhost.localdomain/36673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 07:57:34,755 INFO [Listener at localhost.localdomain/36673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 07:57:34,758 INFO [Listener at localhost.localdomain/36673] ipc.NettyRpcServer(120): Bind to /188.40.62.62:33311
2023-05-31 07:57:34,760 INFO [Listener at localhost.localdomain/36673] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-05-31 07:57:34,764 DEBUG [Listener at localhost.localdomain/36673] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-05-31 07:57:34,765 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:34,767 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:34,769 INFO [Listener at localhost.localdomain/36673] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33311 connecting to ZooKeeper ensemble=127.0.0.1:49338
2023-05-31 07:57:34,781 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:333110x0, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 07:57:34,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33311-0x100803e236d0001 connected
2023-05-31 07:57:34,782 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 07:57:34,784 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 07:57:34,784 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ZKUtil(164): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 07:57:34,785 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33311
2023-05-31 07:57:34,785 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33311
2023-05-31 07:57:34,786 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33311
2023-05-31 07:57:34,786 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33311
2023-05-31 07:57:34,787 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33311
2023-05-31 07:57:34,789 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:34,806 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 07:57:34,808 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:34,835 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 07:57:34,835 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-05-31 07:57:34,836 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:57:34,836 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 07:57:34,837 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,43657,1685519853629 from backup master directory
2023-05-31 07:57:34,837 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-05-31 07:57:34,848 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:34,848 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-05-31 07:57:34,849 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 07:57:34,849 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:34,852 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-05-31 07:57:34,854 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-05-31 07:57:34,933 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase.id with ID: e4e97f6c-e25a-4a71-a642-720e213cef92
2023-05-31 07:57:34,979 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:35,001 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:57:35,044 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0065313c to 127.0.0.1:49338 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 07:57:35,080 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ccf2d7a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 07:57:35,101 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 07:57:35,102 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-05-31 07:57:35,110 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 07:57:35,142 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store-tmp
2023-05-31 07:57:35,168 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:57:35,168 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 07:57:35,169 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 07:57:35,169 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 07:57:35,169 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 07:57:35,169 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 07:57:35,169 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 07:57:35,169 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 07:57:35,170 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/WALs/jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:35,190 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C43657%2C1685519853629, suffix=, logDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/WALs/jenkins-hbase16.apache.org,43657,1685519853629, archiveDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/oldWALs, maxLogs=10
2023-05-31 07:57:35,206 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:57:35,227 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/WALs/jenkins-hbase16.apache.org,43657,1685519853629/jenkins-hbase16.apache.org%2C43657%2C1685519853629.1685519855204
2023-05-31 07:57:35,227 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]]
2023-05-31 07:57:35,228 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME =>
'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:57:35,228 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:57:35,231 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,232 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,283 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,290 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 07:57:35,309 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 07:57:35,320 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:35,325 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,327 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,342 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:57:35,346 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:57:35,348 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=691223, jitterRate=-0.12106572091579437}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:57:35,348 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 07:57:35,349 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 07:57:35,371 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 07:57:35,371 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 07:57:35,374 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 07:57:35,376 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-31 07:57:35,406 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 30 msec 2023-05-31 07:57:35,407 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 07:57:35,429 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 07:57:35,434 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 07:57:35,457 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 07:57:35,460 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 07:57:35,462 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 07:57:35,466 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 07:57:35,470 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 07:57:35,502 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:57:35,503 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 07:57:35,504 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 07:57:35,518 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 07:57:35,531 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 07:57:35,531 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 07:57:35,531 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:57:35,531 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,43657,1685519853629, sessionid=0x100803e236d0000, setting cluster-up flag (Was=false) 2023-05-31 07:57:35,564 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:57:35,593 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 07:57:35,597 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,43657,1685519853629 2023-05-31 07:57:35,618 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:57:35,652 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 07:57:35,654 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,43657,1685519853629 2023-05-31 07:57:35,656 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.hbase-snapshot/.tmp 2023-05-31 07:57:35,691 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(951): ClusterId : e4e97f6c-e25a-4a71-a642-720e213cef92 2023-05-31 07:57:35,695 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 07:57:35,708 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 07:57:35,708 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 07:57:35,719 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 07:57:35,720 DEBUG [RS:0;jenkins-hbase16:33311] zookeeper.ReadOnlyZKClient(139): Connect 0x66c8cee3 to 127.0.0.1:49338 with session timeout=90000ms, retries 30, retry interval 
1000ms, keepAlive=60000ms 2023-05-31 07:57:35,732 DEBUG [RS:0;jenkins-hbase16:33311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@680f2c4a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:57:35,733 DEBUG [RS:0;jenkins-hbase16:33311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c30a593, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 07:57:35,753 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:33311 2023-05-31 07:57:35,757 INFO [RS:0;jenkins-hbase16:33311] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 07:57:35,757 INFO [RS:0;jenkins-hbase16:33311] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 07:57:35,757 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 07:57:35,758 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 07:57:35,759 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,43657,1685519853629 with isa=jenkins-hbase16.apache.org/188.40.62.62:33311, startcode=1685519854750 2023-05-31 07:57:35,769 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:35,770 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 07:57:35,770 DEBUG 
[master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:35,774 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685519885773 2023-05-31 07:57:35,774 DEBUG [RS:0;jenkins-hbase16:33311] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 07:57:35,776 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 07:57:35,779 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 07:57:35,779 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 07:57:35,785 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 07:57:35,786 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 07:57:35,794 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 07:57:35,795 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 07:57:35,795 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 07:57:35,795 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 07:57:35,796 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-31 07:57:35,798 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 07:57:35,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 07:57:35,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 07:57:35,805 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 07:57:35,806 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 07:57:35,808 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519855807,5,FailOnTimeoutGroup] 2023-05-31 07:57:35,809 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519855809,5,FailOnTimeoutGroup] 2023-05-31 07:57:35,809 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:35,810 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 07:57:35,811 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-05-31 07:57:35,811 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:35,827 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 07:57:35,828 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 07:57:35,829 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338 2023-05-31 07:57:35,856 DEBUG [PEWorker-1] 
regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:57:35,859 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 07:57:35,862 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/info 2023-05-31 07:57:35,863 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 07:57:35,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:35,865 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 07:57:35,868 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:57:35,869 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 07:57:35,870 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:35,870 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 07:57:35,873 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/table 2023-05-31 07:57:35,873 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 07:57:35,875 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:35,876 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740 2023-05-31 07:57:35,878 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740 2023-05-31 07:57:35,881 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 07:57:35,883 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 07:57:35,887 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:57:35,888 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=690346, jitterRate=-0.12218058109283447}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 07:57:35,888 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 07:57:35,889 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 07:57:35,889 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 07:57:35,889 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 07:57:35,889 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 07:57:35,889 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 07:57:35,890 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 07:57:35,890 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 07:57:35,895 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 07:57:35,896 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 07:57:35,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 07:57:35,905 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:46113, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 07:57:35,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:35,918 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 07:57:35,921 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 07:57:35,935 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338 2023-05-31 07:57:35,936 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43311 2023-05-31 07:57:35,936 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 07:57:35,947 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:57:35,948 DEBUG 
[RS:0;jenkins-hbase16:33311] zookeeper.ZKUtil(162): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:35,948 WARN [RS:0;jenkins-hbase16:33311] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 07:57:35,949 INFO [RS:0;jenkins-hbase16:33311] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:57:35,950 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:35,950 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,33311,1685519854750] 2023-05-31 07:57:35,960 DEBUG [RS:0;jenkins-hbase16:33311] zookeeper.ZKUtil(162): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:35,969 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 07:57:35,977 INFO [RS:0;jenkins-hbase16:33311] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 07:57:35,994 INFO [RS:0;jenkins-hbase16:33311] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 07:57:35,997 INFO [RS:0;jenkins-hbase16:33311] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 
07:57:35,997 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:35,998 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 07:57:36,004 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,004 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,004 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,004 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,004 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,004 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,005 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 07:57:36,005 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,005 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting 
executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,005 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,005 DEBUG [RS:0;jenkins-hbase16:33311] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:57:36,006 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,006 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,006 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,020 INFO [RS:0;jenkins-hbase16:33311] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 07:57:36,021 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33311,1685519854750-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 07:57:36,034 INFO [RS:0;jenkins-hbase16:33311] regionserver.Replication(203): jenkins-hbase16.apache.org,33311,1685519854750 started 2023-05-31 07:57:36,035 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,33311,1685519854750, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:33311, sessionid=0x100803e236d0001 2023-05-31 07:57:36,035 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 07:57:36,035 DEBUG [RS:0;jenkins-hbase16:33311] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:36,035 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,33311,1685519854750' 2023-05-31 07:57:36,035 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 07:57:36,036 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 07:57:36,036 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 07:57:36,036 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 07:57:36,037 DEBUG [RS:0;jenkins-hbase16:33311] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:36,037 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,33311,1685519854750' 2023-05-31 07:57:36,037 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 07:57:36,037 DEBUG [RS:0;jenkins-hbase16:33311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 07:57:36,038 DEBUG [RS:0;jenkins-hbase16:33311] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 07:57:36,038 INFO [RS:0;jenkins-hbase16:33311] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 07:57:36,038 INFO [RS:0;jenkins-hbase16:33311] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 07:57:36,074 DEBUG [jenkins-hbase16:43657] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 07:57:36,077 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,33311,1685519854750, state=OPENING 2023-05-31 07:57:36,093 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 07:57:36,101 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:57:36,102 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:57:36,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,33311,1685519854750}] 2023-05-31 07:57:36,147 INFO [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C33311%2C1685519854750, suffix=, 
logDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750, archiveDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/oldWALs, maxLogs=32 2023-05-31 07:57:36,161 INFO [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519856150 2023-05-31 07:57:36,161 DEBUG [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:57:36,296 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:57:36,299 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 07:57:36,303 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:58058, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 07:57:36,316 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 07:57:36,317 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:57:36,320 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C33311%2C1685519854750.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750, archiveDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/oldWALs, maxLogs=32 2023-05-31 07:57:36,337 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.meta.1685519856321.meta 2023-05-31 07:57:36,337 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:57:36,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:57:36,340 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 07:57:36,356 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 07:57:36,361 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 07:57:36,365 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 07:57:36,365 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:57:36,365 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 07:57:36,365 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 07:57:36,368 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 07:57:36,370 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/info 2023-05-31 07:57:36,370 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/info 2023-05-31 07:57:36,370 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 07:57:36,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:36,371 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 07:57:36,373 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:57:36,373 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:57:36,373 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 07:57:36,374 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:36,374 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 07:57:36,375 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/table 2023-05-31 07:57:36,376 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/table 2023-05-31 07:57:36,376 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 07:57:36,377 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:57:36,379 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740 2023-05-31 07:57:36,382 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740 2023-05-31 07:57:36,386 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 07:57:36,388 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 07:57:36,390 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=770179, jitterRate=-0.020667433738708496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 07:57:36,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 07:57:36,401 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685519856291 2023-05-31 07:57:36,416 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 07:57:36,417 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 07:57:36,417 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,33311,1685519854750, state=OPEN 2023-05-31 07:57:36,431 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 07:57:36,431 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:57:36,438 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 07:57:36,438 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,33311,1685519854750 in 323 msec 2023-05-31 07:57:36,447 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 07:57:36,447 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 535 msec 2023-05-31 07:57:36,457 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 757 msec 2023-05-31 07:57:36,457 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685519856457, completionTime=-1 2023-05-31 07:57:36,458 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 07:57:36,458 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 07:57:36,512 DEBUG [hconnection-0x4f8217d7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 07:57:36,514 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:58060, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 07:57:36,529 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 07:57:36,529 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685519916529 2023-05-31 07:57:36,529 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685519976529 2023-05-31 07:57:36,529 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 70 msec 2023-05-31 07:57:36,562 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43657,1685519853629-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,563 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43657,1685519853629-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 07:57:36,563 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43657,1685519853629-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 07:57:36,564 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:43657, period=300000, unit=MILLISECONDS is enabled.
2023-05-31 07:57:36,565 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-05-31 07:57:36,571 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175):
2023-05-31 07:57:36,580 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-05-31 07:57:36,581 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 07:57:36,590 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-05-31 07:57:36,593 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 07:57:36,595 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 07:57:36,620 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/hbase/namespace/c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:36,623 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/hbase/namespace/c2354464ba707b12ed00e906f295b105 empty.
2023-05-31 07:57:36,623 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/hbase/namespace/c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:36,624 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-05-31 07:57:36,682 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-05-31 07:57:36,685 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c2354464ba707b12ed00e906f295b105, NAME => 'hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c2354464ba707b12ed00e906f295b105, disabling compactions & flushes
2023-05-31 07:57:36,701 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. after waiting 0 ms
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:36,701 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:36,701 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c2354464ba707b12ed00e906f295b105:
2023-05-31 07:57:36,707 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 07:57:36,720 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519856709"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519856709"}]},"ts":"1685519856709"}
2023-05-31 07:57:36,742 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 07:57:36,744 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 07:57:36,749 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519856745"}]},"ts":"1685519856745"}
2023-05-31 07:57:36,754 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-05-31 07:57:36,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2354464ba707b12ed00e906f295b105, ASSIGN}]
2023-05-31 07:57:36,827 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2354464ba707b12ed00e906f295b105, ASSIGN
2023-05-31 07:57:36,829 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c2354464ba707b12ed00e906f295b105, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,33311,1685519854750; forceNewPlan=false, retain=false
2023-05-31 07:57:36,981 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2354464ba707b12ed00e906f295b105, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,33311,1685519854750
2023-05-31 07:57:36,982 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519856980"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519856980"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519856980"}]},"ts":"1685519856980"}
2023-05-31 07:57:36,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure c2354464ba707b12ed00e906f295b105, server=jenkins-hbase16.apache.org,33311,1685519854750}]
2023-05-31 07:57:37,156 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:37,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2354464ba707b12ed00e906f295b105, NAME => 'hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.', STARTKEY => '', ENDKEY => ''}
2023-05-31 07:57:37,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:57:37,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,160 INFO [StoreOpener-c2354464ba707b12ed00e906f295b105-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,162 DEBUG [StoreOpener-c2354464ba707b12ed00e906f295b105-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/info
2023-05-31 07:57:37,162 DEBUG [StoreOpener-c2354464ba707b12ed00e906f295b105-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/info
2023-05-31 07:57:37,163 INFO [StoreOpener-c2354464ba707b12ed00e906f295b105-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2354464ba707b12ed00e906f295b105 columnFamilyName info
2023-05-31 07:57:37,163 INFO [StoreOpener-c2354464ba707b12ed00e906f295b105-1] regionserver.HStore(310): Store=c2354464ba707b12ed00e906f295b105/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:57:37,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,170 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for c2354464ba707b12ed00e906f295b105
2023-05-31 07:57:37,174 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 07:57:37,174 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened c2354464ba707b12ed00e906f295b105; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=869708, jitterRate=0.10589148104190826}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 07:57:37,174 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for c2354464ba707b12ed00e906f295b105:
2023-05-31 07:57:37,177 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105., pid=6, masterSystemTime=1685519857148
2023-05-31 07:57:37,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:37,181 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.
2023-05-31 07:57:37,182 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2354464ba707b12ed00e906f295b105, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,33311,1685519854750
2023-05-31 07:57:37,183 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519857182"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519857182"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519857182"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519857182"}]},"ts":"1685519857182"}
2023-05-31 07:57:37,191 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-05-31 07:57:37,192 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure c2354464ba707b12ed00e906f295b105, server=jenkins-hbase16.apache.org,33311,1685519854750 in 195 msec
2023-05-31 07:57:37,195 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-05-31 07:57:37,196 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c2354464ba707b12ed00e906f295b105, ASSIGN in 369 msec
2023-05-31 07:57:37,197 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 07:57:37,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519857198"}]},"ts":"1685519857198"}
2023-05-31 07:57:37,202 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-05-31 07:57:37,216 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-05-31 07:57:37,219 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 634 msec
2023-05-31 07:57:37,293 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-05-31 07:57:37,302 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-05-31 07:57:37,302 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:57:37,349 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-05-31 07:57:37,385 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 07:57:37,398 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 60 msec
2023-05-31 07:57:37,404 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-05-31 07:57:37,426 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-05-31 07:57:37,441 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 35 msec
2023-05-31 07:57:37,464 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-05-31 07:57:37,481 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-05-31 07:57:37,481 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.632sec
2023-05-31 07:57:37,486 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-05-31 07:57:37,488 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-05-31 07:57:37,488 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-05-31 07:57:37,489 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43657,1685519853629-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-05-31 07:57:37,490 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43657,1685519853629-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-05-31 07:57:37,499 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ReadOnlyZKClient(139): Connect 0x60daba12 to 127.0.0.1:49338 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 07:57:37,506 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-05-31 07:57:37,511 DEBUG [Listener at localhost.localdomain/36673] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ed6aa5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 07:57:37,524 DEBUG [hconnection-0x41f548fa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 07:57:37,537 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:58070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 07:57:37,550 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,43657,1685519853629
2023-05-31 07:57:37,551 INFO [Listener at localhost.localdomain/36673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:57:37,572 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-05-31 07:57:37,573 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:57:37,573 INFO [Listener at localhost.localdomain/36673] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false
2023-05-31 07:57:37,587 DEBUG [Listener at localhost.localdomain/36673] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-05-31 07:57:37,591 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:34170, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-05-31 07:57:37,603 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions.
2023-05-31 07:57:37,603 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing.
2023-05-31 07:57:37,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-05-31 07:57:37,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling
2023-05-31 07:57:37,616 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION
2023-05-31 07:57:37,619 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-05-31 07:57:37,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] master.MasterRpcServices(697): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9
2023-05-31 07:57:37,626 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:37,628 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9 empty.
2023-05-31 07:57:37,630 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:37,630 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions
2023-05-31 07:57:37,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9
2023-05-31 07:57:37,664 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001
2023-05-31 07:57:37,666 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5d8e9725cb628ef536c661398f3f97e9, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/.tmp
2023-05-31 07:57:37,683 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:57:37,683 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 5d8e9725cb628ef536c661398f3f97e9, disabling compactions & flushes
2023-05-31 07:57:37,683 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:37,683 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:37,683 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. after waiting 0 ms
2023-05-31 07:57:37,683 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:37,684 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:37,684 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 5d8e9725cb628ef536c661398f3f97e9:
2023-05-31 07:57:37,688 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META
2023-05-31 07:57:37,690 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685519857690"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519857690"}]},"ts":"1685519857690"}
2023-05-31 07:57:37,694 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-05-31 07:57:37,695 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-05-31 07:57:37,696 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519857696"}]},"ts":"1685519857696"}
2023-05-31 07:57:37,698 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta
2023-05-31 07:57:37,719 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=5d8e9725cb628ef536c661398f3f97e9, ASSIGN}]
2023-05-31 07:57:37,722 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=5d8e9725cb628ef536c661398f3f97e9, ASSIGN
2023-05-31 07:57:37,725 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=5d8e9725cb628ef536c661398f3f97e9, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,33311,1685519854750; forceNewPlan=false, retain=false
2023-05-31 07:57:37,876 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5d8e9725cb628ef536c661398f3f97e9, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,33311,1685519854750
2023-05-31 07:57:37,877 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685519857876"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519857876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519857876"}]},"ts":"1685519857876"}
2023-05-31 07:57:37,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 5d8e9725cb628ef536c661398f3f97e9, server=jenkins-hbase16.apache.org,33311,1685519854750}]
2023-05-31 07:57:38,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:38,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d8e9725cb628ef536c661398f3f97e9, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.', STARTKEY => '', ENDKEY => ''}
2023-05-31 07:57:38,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:57:38,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,055 INFO [StoreOpener-5d8e9725cb628ef536c661398f3f97e9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,057 DEBUG [StoreOpener-5d8e9725cb628ef536c661398f3f97e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info
2023-05-31 07:57:38,058 DEBUG [StoreOpener-5d8e9725cb628ef536c661398f3f97e9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info
2023-05-31 07:57:38,059 INFO [StoreOpener-5d8e9725cb628ef536c661398f3f97e9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d8e9725cb628ef536c661398f3f97e9 columnFamilyName info
2023-05-31 07:57:38,060 INFO [StoreOpener-5d8e9725cb628ef536c661398f3f97e9-1] regionserver.HStore(310): Store=5d8e9725cb628ef536c661398f3f97e9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:57:38,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 5d8e9725cb628ef536c661398f3f97e9
2023-05-31 07:57:38,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 07:57:38,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 5d8e9725cb628ef536c661398f3f97e9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=810031, jitterRate=0.030008777976036072}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 07:57:38,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 5d8e9725cb628ef536c661398f3f97e9:
2023-05-31 07:57:38,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9., pid=11, masterSystemTime=1685519858038
2023-05-31 07:57:38,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:38,076 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.
2023-05-31 07:57:38,077 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=5d8e9725cb628ef536c661398f3f97e9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,33311,1685519854750
2023-05-31 07:57:38,077 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685519858077"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519858077"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519858077"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519858077"}]},"ts":"1685519858077"}
2023-05-31 07:57:38,084 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-05-31 07:57:38,084 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 5d8e9725cb628ef536c661398f3f97e9, server=jenkins-hbase16.apache.org,33311,1685519854750 in 198 msec
2023-05-31 07:57:38,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-05-31 07:57:38,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=5d8e9725cb628ef536c661398f3f97e9, ASSIGN in 365 msec
2023-05-31 07:57:38,089 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-05-31 07:57:38,089 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put
{"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519858089"}]},"ts":"1685519858089"} 2023-05-31 07:57:38,092 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-31 07:57:38,147 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 07:57:38,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 538 msec 2023-05-31 07:57:41,877 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-31 07:57:41,975 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 07:57:41,977 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 07:57:41,978 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-31 07:57:43,783 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 07:57:43,784 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-31 07:57:47,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43657] 
master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 07:57:47,647 INFO [Listener at localhost.localdomain/36673] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-31 07:57:47,652 DEBUG [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-31 07:57:47,653 DEBUG [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 2023-05-31 07:57:59,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33311] regionserver.HRegion(9158): Flush requested on 5d8e9725cb628ef536c661398f3f97e9 2023-05-31 07:57:59,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5d8e9725cb628ef536c661398f3f97e9 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 07:57:59,778 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/b0a78878e6ce41d09133321501d2f0a7 2023-05-31 07:57:59,841 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/b0a78878e6ce41d09133321501d2f0a7 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7 2023-05-31 07:57:59,853 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 07:57:59,855 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 5d8e9725cb628ef536c661398f3f97e9 in 157ms, sequenceid=11, compaction requested=false 2023-05-31 07:57:59,856 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:07,920 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:10,127 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:12,332 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:14,538 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:14,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33311] regionserver.HRegion(9158): Flush requested on 5d8e9725cb628ef536c661398f3f97e9 2023-05-31 07:58:14,538 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5d8e9725cb628ef536c661398f3f97e9 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 07:58:14,741 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:14,761 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/9ebf7c41d3f84110a595c465431257b6 2023-05-31 07:58:14,772 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/9ebf7c41d3f84110a595c465431257b6 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6 2023-05-31 07:58:14,782 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6, entries=7, sequenceid=21, filesize=12.1 K 2023-05-31 07:58:14,984 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:14,985 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 5d8e9725cb628ef536c661398f3f97e9 in 446ms, sequenceid=21, compaction requested=false 2023-05-31 07:58:14,986 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:14,986 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-31 07:58:14,986 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 07:58:14,989 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7 because midkey is the same as first or last row 2023-05-31 07:58:16,743 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:18,949 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:18,953 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33311%2C1685519854750:(num 1685519856150) roll requested 2023-05-31 07:58:18,953 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 206 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:19,174 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:19,175 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519856150 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519898954 2023-05-31 07:58:19,176 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:19,176 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519856150 is not closed yet, will try archiving it next time 2023-05-31 07:58:28,978 INFO [Listener at localhost.localdomain/36673] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 07:58:33,984 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:33,985 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:33,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33311] regionserver.HRegion(9158): Flush requested on 5d8e9725cb628ef536c661398f3f97e9 2023-05-31 07:58:33,985 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33311%2C1685519854750:(num 1685519898954) roll requested 2023-05-31 07:58:33,985 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5d8e9725cb628ef536c661398f3f97e9 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 07:58:35,987 INFO [Listener at localhost.localdomain/36673] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-31 07:58:38,990 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5002 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:38,990 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5002 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:39,003 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:39,003 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK], DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK]] 2023-05-31 07:58:39,006 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519898954 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519913986 2023-05-31 07:58:39,007 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41287,DS-f9e2b882-a85b-417f-ae3b-bc2982149160,DISK], DatanodeInfoWithStorage[127.0.0.1:40851,DS-c2fb8e09-8080-4406-a64c-6a27f8f35b2d,DISK]] 2023-05-31 07:58:39,007 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519898954 is not closed yet, will try archiving it next time 2023-05-31 07:58:39,014 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/37d5da222ced4c7c98b7d0589de6ddae 
2023-05-31 07:58:39,024 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/37d5da222ced4c7c98b7d0589de6ddae as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae 2023-05-31 07:58:39,032 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae, entries=7, sequenceid=31, filesize=12.1 K 2023-05-31 07:58:39,035 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 5d8e9725cb628ef536c661398f3f97e9 in 5050ms, sequenceid=31, compaction requested=true 2023-05-31 07:58:39,035 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:39,035 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-31 07:58:39,035 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 07:58:39,036 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7 because midkey is the same as first or last row 2023-05-31 07:58:39,037 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): 
Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 07:58:39,038 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 07:58:39,042 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 07:58:39,043 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HStore(1912): 5d8e9725cb628ef536c661398f3f97e9/info is initiating minor compaction (all files) 2023-05-31 07:58:39,044 INFO [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5d8e9725cb628ef536c661398f3f97e9/info in TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 
2023-05-31 07:58:39,044 INFO [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae] into tmpdir=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp, totalSize=36.3 K 2023-05-31 07:58:39,045 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] compactions.Compactor(207): Compacting b0a78878e6ce41d09133321501d2f0a7, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685519867659 2023-05-31 07:58:39,046 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] compactions.Compactor(207): Compacting 9ebf7c41d3f84110a595c465431257b6, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685519881700 2023-05-31 07:58:39,047 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] compactions.Compactor(207): Compacting 37d5da222ced4c7c98b7d0589de6ddae, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685519896541 2023-05-31 07:58:39,071 INFO [RS:0;jenkins-hbase16:33311-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5d8e9725cb628ef536c661398f3f97e9#info#compaction#3 average throughput is 21.55 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 07:58:39,091 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/2d8b6260fa35488ab4f6634693265efb as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/2d8b6260fa35488ab4f6634693265efb 2023-05-31 07:58:39,110 INFO [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5d8e9725cb628ef536c661398f3f97e9/info of 5d8e9725cb628ef536c661398f3f97e9 into 2d8b6260fa35488ab4f6634693265efb(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 07:58:39,110 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:39,110 INFO [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9., storeName=5d8e9725cb628ef536c661398f3f97e9/info, priority=13, startTime=1685519919037; duration=0sec 2023-05-31 07:58:39,111 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-31 07:58:39,112 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 07:58:39,112 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/2d8b6260fa35488ab4f6634693265efb because midkey is the same as first or last row 2023-05-31 07:58:39,112 DEBUG [RS:0;jenkins-hbase16:33311-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 07:58:39,419 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519898954 to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/oldWALs/jenkins-hbase16.apache.org%2C33311%2C1685519854750.1685519898954 2023-05-31 07:58:51,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33311] 
regionserver.HRegion(9158): Flush requested on 5d8e9725cb628ef536c661398f3f97e9 2023-05-31 07:58:51,124 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5d8e9725cb628ef536c661398f3f97e9 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 07:58:51,151 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/6d187bbd72ae4434bd8bb9c2ff91a515 2023-05-31 07:58:51,162 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/6d187bbd72ae4434bd8bb9c2ff91a515 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/6d187bbd72ae4434bd8bb9c2ff91a515 2023-05-31 07:58:51,170 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/6d187bbd72ae4434bd8bb9c2ff91a515, entries=7, sequenceid=42, filesize=12.1 K 2023-05-31 07:58:51,171 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 5d8e9725cb628ef536c661398f3f97e9 in 47ms, sequenceid=42, compaction requested=false 2023-05-31 07:58:51,171 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:51,172 DEBUG [MemStoreFlusher.0] 
regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-31 07:58:51,172 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 07:58:51,172 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/2d8b6260fa35488ab4f6634693265efb because midkey is the same as first or last row 2023-05-31 07:58:59,145 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 07:58:59,147 INFO [Listener at localhost.localdomain/36673] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 07:58:59,147 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x60daba12 to 127.0.0.1:49338 2023-05-31 07:58:59,147 DEBUG [Listener at localhost.localdomain/36673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:58:59,148 DEBUG [Listener at localhost.localdomain/36673] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 07:58:59,148 DEBUG [Listener at localhost.localdomain/36673] util.JVMClusterUtil(257): Found active master hash=330246149, stopped=false 2023-05-31 07:58:59,149 INFO [Listener at localhost.localdomain/36673] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,43657,1685519853629 2023-05-31 07:58:59,181 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 07:58:59,181 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): 
master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 07:58:59,182 INFO [Listener at localhost.localdomain/36673] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 07:58:59,182 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:58:59,183 DEBUG [Listener at localhost.localdomain/36673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0065313c to 127.0.0.1:49338 2023-05-31 07:58:59,184 DEBUG [Listener at localhost.localdomain/36673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:58:59,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:58:59,184 INFO [Listener at localhost.localdomain/36673] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,33311,1685519854750' ***** 2023-05-31 07:58:59,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:58:59,184 INFO [Listener at localhost.localdomain/36673] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 07:58:59,185 INFO [RS:0;jenkins-hbase16:33311] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 07:58:59,185 INFO [RS:0;jenkins-hbase16:33311] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-31 07:58:59,185 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 07:58:59,185 INFO [RS:0;jenkins-hbase16:33311] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 07:58:59,186 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(3303): Received CLOSE for 5d8e9725cb628ef536c661398f3f97e9 2023-05-31 07:58:59,187 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(3303): Received CLOSE for c2354464ba707b12ed00e906f295b105 2023-05-31 07:58:59,187 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:58:59,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 5d8e9725cb628ef536c661398f3f97e9, disabling compactions & flushes 2023-05-31 07:58:59,188 DEBUG [RS:0;jenkins-hbase16:33311] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66c8cee3 to 127.0.0.1:49338 2023-05-31 07:58:59,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 2023-05-31 07:58:59,188 DEBUG [RS:0;jenkins-hbase16:33311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:58:59,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 2023-05-31 07:58:59,188 INFO [RS:0;jenkins-hbase16:33311] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 07:58:59,188 INFO [RS:0;jenkins-hbase16:33311] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-31 07:58:59,188 INFO [RS:0;jenkins-hbase16:33311] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 07:58:59,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. after waiting 0 ms 2023-05-31 07:58:59,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 2023-05-31 07:58:59,188 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 07:58:59,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 5d8e9725cb628ef536c661398f3f97e9 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-31 07:58:59,189 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 07:58:59,189 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1478): Online Regions={5d8e9725cb628ef536c661398f3f97e9=TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9., 1588230740=hbase:meta,,1.1588230740, c2354464ba707b12ed00e906f295b105=hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105.} 2023-05-31 07:58:59,190 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 07:58:59,190 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 07:58:59,190 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 07:58:59,190 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] 
regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 07:58:59,190 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 07:58:59,190 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-31 07:58:59,191 DEBUG [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1504): Waiting on 1588230740, 5d8e9725cb628ef536c661398f3f97e9, c2354464ba707b12ed00e906f295b105 2023-05-31 07:58:59,216 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/.tmp/info/552f73775d094c3a9d4d17730a1f7483 2023-05-31 07:58:59,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/c2b0a861c35049b1b6bb779ef3da8400 2023-05-31 07:58:59,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/.tmp/info/c2b0a861c35049b1b6bb779ef3da8400 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/c2b0a861c35049b1b6bb779ef3da8400 2023-05-31 07:58:59,240 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/c2b0a861c35049b1b6bb779ef3da8400, entries=3, sequenceid=48, filesize=7.9 K 2023-05-31 07:58:59,240 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/.tmp/table/d8db283ce21f45b6a138ea7ec50d0da9 2023-05-31 07:58:59,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 5d8e9725cb628ef536c661398f3f97e9 in 54ms, sequenceid=48, compaction requested=true 2023-05-31 07:58:59,244 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae] to archive 2023-05-31 07:58:59,245 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.-1] backup.HFileArchiver(360): Archiving 
compacted files. 2023-05-31 07:58:59,249 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7 to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/archive/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/b0a78878e6ce41d09133321501d2f0a7 2023-05-31 07:58:59,251 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6 to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/archive/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/9ebf7c41d3f84110a595c465431257b6 2023-05-31 07:58:59,251 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/.tmp/info/552f73775d094c3a9d4d17730a1f7483 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/info/552f73775d094c3a9d4d17730a1f7483 2023-05-31 07:58:59,253 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/archive/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/info/37d5da222ced4c7c98b7d0589de6ddae 2023-05-31 07:58:59,259 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/info/552f73775d094c3a9d4d17730a1f7483, entries=20, sequenceid=14, filesize=7.4 K 2023-05-31 07:58:59,261 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/.tmp/table/d8db283ce21f45b6a138ea7ec50d0da9 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/table/d8db283ce21f45b6a138ea7ec50d0da9 2023-05-31 07:58:59,268 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/table/d8db283ce21f45b6a138ea7ec50d0da9, entries=4, sequenceid=14, filesize=4.8 K 2023-05-31 07:58:59,269 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 79ms, sequenceid=14, compaction requested=false 2023-05-31 07:58:59,279 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 07:58:59,280 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 07:58:59,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 07:58:59,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 07:58:59,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 07:58:59,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/default/TestLogRolling-testSlowSyncLogRolling/5d8e9725cb628ef536c661398f3f97e9/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-31 07:58:59,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 2023-05-31 07:58:59,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 5d8e9725cb628ef536c661398f3f97e9: 2023-05-31 07:58:59,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685519857603.5d8e9725cb628ef536c661398f3f97e9. 
2023-05-31 07:58:59,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing c2354464ba707b12ed00e906f295b105, disabling compactions & flushes 2023-05-31 07:58:59,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. 2023-05-31 07:58:59,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. 2023-05-31 07:58:59,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. after waiting 0 ms 2023-05-31 07:58:59,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. 
2023-05-31 07:58:59,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing c2354464ba707b12ed00e906f295b105 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 07:58:59,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/.tmp/info/3510b239efed41ebafae8e3c6e5a0723 2023-05-31 07:58:59,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/.tmp/info/3510b239efed41ebafae8e3c6e5a0723 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/info/3510b239efed41ebafae8e3c6e5a0723 2023-05-31 07:58:59,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/info/3510b239efed41ebafae8e3c6e5a0723, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 07:58:59,317 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for c2354464ba707b12ed00e906f295b105 in 32ms, sequenceid=6, compaction requested=false 2023-05-31 07:58:59,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/data/hbase/namespace/c2354464ba707b12ed00e906f295b105/recovered.edits/9.seqid, 
newMaxSeqId=9, maxSeqId=1 2023-05-31 07:58:59,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. 2023-05-31 07:58:59,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for c2354464ba707b12ed00e906f295b105: 2023-05-31 07:58:59,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685519856580.c2354464ba707b12ed00e906f295b105. 2023-05-31 07:58:59,391 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,33311,1685519854750; all regions closed. 2023-05-31 07:58:59,394 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:58:59,407 DEBUG [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/oldWALs 2023-05-31 07:58:59,407 INFO [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C33311%2C1685519854750.meta:.meta(num 1685519856321) 2023-05-31 07:58:59,408 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/WALs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:58:59,418 DEBUG [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/oldWALs 2023-05-31 07:58:59,419 INFO [RS:0;jenkins-hbase16:33311] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C33311%2C1685519854750:(num 1685519913986) 2023-05-31 07:58:59,419 DEBUG [RS:0;jenkins-hbase16:33311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:58:59,419 INFO 
[RS:0;jenkins-hbase16:33311] regionserver.LeaseManager(133): Closed leases 2023-05-31 07:58:59,419 INFO [RS:0;jenkins-hbase16:33311] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 07:58:59,419 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 07:58:59,420 INFO [RS:0;jenkins-hbase16:33311] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:33311 2023-05-31 07:58:59,435 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:58:59,435 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,33311,1685519854750 2023-05-31 07:58:59,436 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:58:59,436 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,33311,1685519854750] 2023-05-31 07:58:59,437 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,33311,1685519854750; numProcessing=1 2023-05-31 07:58:59,452 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,33311,1685519854750 
already deleted, retry=false 2023-05-31 07:58:59,452 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,33311,1685519854750 expired; onlineServers=0 2023-05-31 07:58:59,452 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,43657,1685519853629' ***** 2023-05-31 07:58:59,452 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 07:58:59,453 DEBUG [M:0;jenkins-hbase16:43657] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5292223e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 07:58:59,453 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,43657,1685519853629 2023-05-31 07:58:59,453 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,43657,1685519853629; all regions closed. 2023-05-31 07:58:59,453 DEBUG [M:0;jenkins-hbase16:43657] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:58:59,453 DEBUG [M:0;jenkins-hbase16:43657] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 07:58:59,453 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 07:58:59,454 DEBUG [M:0;jenkins-hbase16:43657] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 07:58:59,454 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519855807] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519855807,5,FailOnTimeoutGroup] 2023-05-31 07:58:59,456 INFO [M:0;jenkins-hbase16:43657] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 07:58:59,456 INFO [M:0;jenkins-hbase16:43657] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-31 07:58:59,454 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519855809] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519855809,5,FailOnTimeoutGroup] 2023-05-31 07:58:59,456 INFO [M:0;jenkins-hbase16:43657] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-05-31 07:58:59,457 DEBUG [M:0;jenkins-hbase16:43657] master.HMaster(1512): Stopping service threads 2023-05-31 07:58:59,457 INFO [M:0;jenkins-hbase16:43657] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 07:58:59,458 INFO [M:0;jenkins-hbase16:43657] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 07:58:59,459 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 07:58:59,465 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 07:58:59,465 DEBUG [M:0;jenkins-hbase16:43657] zookeeper.ZKUtil(398): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 07:58:59,465 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:58:59,465 WARN [M:0;jenkins-hbase16:43657] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 07:58:59,465 INFO [M:0;jenkins-hbase16:43657] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 07:58:59,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 07:58:59,466 INFO [M:0;jenkins-hbase16:43657] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 07:58:59,466 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 07:58:59,466 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 07:58:59,466 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:58:59,466 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 07:58:59,467 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:58:59,467 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-05-31 07:58:59,483 INFO [M:0;jenkins-hbase16:43657] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0a44af4288684dda8e7cb21d9a80e9a8 2023-05-31 07:58:59,488 INFO [M:0;jenkins-hbase16:43657] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a44af4288684dda8e7cb21d9a80e9a8 2023-05-31 07:58:59,490 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0a44af4288684dda8e7cb21d9a80e9a8 as hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0a44af4288684dda8e7cb21d9a80e9a8 2023-05-31 07:58:59,495 INFO [M:0;jenkins-hbase16:43657] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a44af4288684dda8e7cb21d9a80e9a8 2023-05-31 07:58:59,495 INFO 
[M:0;jenkins-hbase16:43657] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0a44af4288684dda8e7cb21d9a80e9a8, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 07:58:59,496 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=100, compaction requested=false 2023-05-31 07:58:59,497 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:58:59,498 DEBUG [M:0;jenkins-hbase16:43657] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 07:58:59,498 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/MasterData/WALs/jenkins-hbase16.apache.org,43657,1685519853629 2023-05-31 07:58:59,501 INFO [M:0;jenkins-hbase16:43657] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 07:58:59,501 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 07:58:59,502 INFO [M:0;jenkins-hbase16:43657] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:43657 2023-05-31 07:58:59,511 DEBUG [M:0;jenkins-hbase16:43657] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,43657,1685519853629 already deleted, retry=false 2023-05-31 07:58:59,544 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:58:59,544 INFO [RS:0;jenkins-hbase16:33311] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,33311,1685519854750; zookeeper connection closed. 2023-05-31 07:58:59,544 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): regionserver:33311-0x100803e236d0001, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:58:59,545 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6a2b1c2e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6a2b1c2e 2023-05-31 07:58:59,546 INFO [Listener at localhost.localdomain/36673] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 07:58:59,645 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:58:59,645 INFO [M:0;jenkins-hbase16:43657] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,43657,1685519853629; zookeeper connection closed. 
2023-05-31 07:58:59,645 DEBUG [Listener at localhost.localdomain/36673-EventThread] zookeeper.ZKWatcher(600): master:43657-0x100803e236d0000, quorum=127.0.0.1:49338, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:58:59,648 WARN [Listener at localhost.localdomain/36673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 07:58:59,655 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 07:58:59,722 WARN [BP-279542273-188.40.62.62-1685519850299 heartbeating to localhost.localdomain/127.0.0.1:43311] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-279542273-188.40.62.62-1685519850299 (Datanode Uuid 8f2f76b6-30b4-4d69-b823-07933661c07b) service to localhost.localdomain/127.0.0.1:43311 2023-05-31 07:58:59,726 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/dfs/data/data3/current/BP-279542273-188.40.62.62-1685519850299] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:58:59,727 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/dfs/data/data4/current/BP-279542273-188.40.62.62-1685519850299] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:58:59,764 WARN [Listener at localhost.localdomain/36673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 07:58:59,769 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 
2023-05-31 07:58:59,876 WARN [BP-279542273-188.40.62.62-1685519850299 heartbeating to localhost.localdomain/127.0.0.1:43311] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 07:58:59,876 WARN [BP-279542273-188.40.62.62-1685519850299 heartbeating to localhost.localdomain/127.0.0.1:43311] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-279542273-188.40.62.62-1685519850299 (Datanode Uuid 1b5abf8e-013e-40e3-a804-ad6d9d3d6cc0) service to localhost.localdomain/127.0.0.1:43311 2023-05-31 07:58:59,877 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/dfs/data/data1/current/BP-279542273-188.40.62.62-1685519850299] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:58:59,878 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/cluster_37c737ca-02e7-cccb-0e93-2b370c796a68/dfs/data/data2/current/BP-279542273-188.40.62.62-1685519850299] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:58:59,911 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 07:59:00,011 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 07:59:00,030 INFO [Listener at localhost.localdomain/36673] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 07:59:00,062 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 07:59:00,071 INFO [Listener at localhost.localdomain/36673] 
hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase16:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase16:0:becomeActiveMaster-MemStoreChunkPool Statistics 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase16:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/36673 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:43311 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 
java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:43311 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 
java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2031846989) connection to 
localhost.localdomain/127.0.0.1:43311 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:43311 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:43311 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@6f737a12 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=442 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=91 (was 71) - SystemLoadAverage LEAK? 
-, ProcessCount=166 (was 167), AvailableMemoryMB=8090 (was 8805) 2023-05-31 07:59:00,079 INFO [Listener at localhost.localdomain/36673] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=442, MaxFileDescriptor=60000, SystemLoadAverage=91, ProcessCount=166, AvailableMemoryMB=8090 2023-05-31 07:59:00,079 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/hadoop.log.dir so I do NOT create it in target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a7e2f39c-093c-5542-68c3-5c732cdea5de/hadoop.tmp.dir so I do NOT create it in target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a, deleteOnExit=true 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] 
hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/test.cache.data in system properties and HBase conf 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 07:59:00,080 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/hadoop.log.dir in system properties and HBase conf 2023-05-31 07:59:00,081 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 07:59:00,081 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 07:59:00,081 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 07:59:00,081 DEBUG [Listener at localhost.localdomain/36673] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-31 07:59:00,081 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 07:59:00,081 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 07:59:00,082 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 07:59:00,083 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/nfs.dump.dir in system properties and HBase conf 2023-05-31 07:59:00,083 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir in system properties and HBase conf 2023-05-31 07:59:00,083 INFO [Listener at localhost.localdomain/36673] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 07:59:00,083 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 07:59:00,083 INFO [Listener at localhost.localdomain/36673] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 07:59:00,085 WARN [Listener at localhost.localdomain/36673] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 07:59:00,087 WARN [Listener at localhost.localdomain/36673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 07:59:00,087 WARN [Listener at localhost.localdomain/36673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 07:59:00,338 WARN [Listener at localhost.localdomain/36673] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 07:59:00,341 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:00,346 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_localdomain_37773_hdfs____tmdumh/webapp 2023-05-31 07:59:00,419 INFO [Listener at localhost.localdomain/36673] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:37773 2023-05-31 07:59:00,420 WARN [Listener at localhost.localdomain/36673] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 07:59:00,421 WARN [Listener at localhost.localdomain/36673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 07:59:00,422 WARN [Listener at localhost.localdomain/36673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 07:59:00,584 WARN [Listener at localhost.localdomain/38437] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:00,597 WARN [Listener at localhost.localdomain/38437] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 07:59:00,600 WARN [Listener at localhost.localdomain/38437] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 07:59:00,602 INFO [Listener at localhost.localdomain/38437] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:00,607 INFO [Listener at localhost.localdomain/38437] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_42855_datanode____.cf29a7/webapp 2023-05-31 07:59:00,679 INFO [Listener at localhost.localdomain/38437] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42855 2023-05-31 07:59:00,685 WARN [Listener at localhost.localdomain/46743] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:00,697 WARN [Listener at localhost.localdomain/46743] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 07:59:00,699 WARN [Listener at localhost.localdomain/46743] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 07:59:00,700 INFO [Listener at localhost.localdomain/46743] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:00,704 INFO [Listener at localhost.localdomain/46743] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_38829_datanode____.8r6e5b/webapp 2023-05-31 07:59:00,776 INFO [Listener at localhost.localdomain/46743] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38829 2023-05-31 07:59:00,783 WARN [Listener at localhost.localdomain/43413] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:01,354 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44f56e378921f817: Processing first storage report for DS-d5740835-14eb-4a13-8d16-743bded2b924 from datanode 4c1d5836-20c1-40a9-9c26-ab5431304c81 2023-05-31 07:59:01,355 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44f56e378921f817: from storage DS-d5740835-14eb-4a13-8d16-743bded2b924 node DatanodeRegistration(127.0.0.1:40483, datanodeUuid=4c1d5836-20c1-40a9-9c26-ab5431304c81, infoPort=32973, infoSecurePort=0, ipcPort=46743, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:01,355 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44f56e378921f817: Processing first storage report for DS-8c4f4b6a-62ae-4eb9-88a2-11a0d6eb6595 from datanode 4c1d5836-20c1-40a9-9c26-ab5431304c81 2023-05-31 07:59:01,355 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x44f56e378921f817: from storage DS-8c4f4b6a-62ae-4eb9-88a2-11a0d6eb6595 node DatanodeRegistration(127.0.0.1:40483, datanodeUuid=4c1d5836-20c1-40a9-9c26-ab5431304c81, infoPort=32973, infoSecurePort=0, ipcPort=46743, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:01,439 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x713323b11fd59c37: Processing first storage report for DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31 from datanode 6277fbff-9b25-4167-9ac9-092927692ba5 2023-05-31 07:59:01,439 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x713323b11fd59c37: from storage DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31 node DatanodeRegistration(127.0.0.1:33623, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=42979, infoSecurePort=0, ipcPort=43413, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:01,439 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x713323b11fd59c37: Processing first storage report for DS-ce4f9d28-cfd4-4740-b4c4-f7451b9e922b from datanode 6277fbff-9b25-4167-9ac9-092927692ba5 2023-05-31 07:59:01,439 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x713323b11fd59c37: from storage DS-ce4f9d28-cfd4-4740-b4c4-f7451b9e922b node DatanodeRegistration(127.0.0.1:33623, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=42979, infoSecurePort=0, ipcPort=43413, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:01,498 DEBUG [Listener at localhost.localdomain/43413] 
hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b 2023-05-31 07:59:01,504 INFO [Listener at localhost.localdomain/43413] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/zookeeper_0, clientPort=51691, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 07:59:01,506 INFO [Listener at localhost.localdomain/43413] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51691 2023-05-31 07:59:01,506 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,507 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,526 INFO [Listener at localhost.localdomain/43413] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff with version=8 
2023-05-31 07:59:01,527 INFO [Listener at localhost.localdomain/43413] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging 2023-05-31 07:59:01,529 INFO [Listener at localhost.localdomain/43413] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 07:59:01,530 INFO [Listener at localhost.localdomain/43413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 07:59:01,532 INFO [Listener at localhost.localdomain/43413] 
ipc.NettyRpcServer(120): Bind to /188.40.62.62:33919 2023-05-31 07:59:01,533 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,534 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,535 INFO [Listener at localhost.localdomain/43413] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33919 connecting to ZooKeeper ensemble=127.0.0.1:51691 2023-05-31 07:59:01,576 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:339190x0, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 07:59:01,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33919-0x100803f7dc40000 connected 2023-05-31 07:59:01,661 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 07:59:01,662 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:01,663 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 07:59:01,664 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33919 2023-05-31 07:59:01,664 DEBUG [Listener at 
localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33919 2023-05-31 07:59:01,665 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33919 2023-05-31 07:59:01,666 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33919 2023-05-31 07:59:01,667 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33919 2023-05-31 07:59:01,667 INFO [Listener at localhost.localdomain/43413] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff, hbase.cluster.distributed=false 2023-05-31 07:59:01,681 INFO [Listener at localhost.localdomain/43413] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 07:59:01,681 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,681 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,681 INFO [Listener at localhost.localdomain/43413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 07:59:01,681 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-05-31 07:59:01,682 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 07:59:01,682 INFO [Listener at localhost.localdomain/43413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 07:59:01,683 INFO [Listener at localhost.localdomain/43413] ipc.NettyRpcServer(120): Bind to /188.40.62.62:35401 2023-05-31 07:59:01,683 INFO [Listener at localhost.localdomain/43413] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 07:59:01,684 DEBUG [Listener at localhost.localdomain/43413] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 07:59:01,685 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,686 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,687 INFO [Listener at localhost.localdomain/43413] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35401 connecting to ZooKeeper ensemble=127.0.0.1:51691 2023-05-31 07:59:01,698 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:354010x0, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 07:59:01,699 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): regionserver:354010x0, quorum=127.0.0.1:51691, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 07:59:01,699 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35401-0x100803f7dc40001 connected 2023-05-31 07:59:01,700 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:01,700 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 07:59:01,701 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35401 2023-05-31 07:59:01,701 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35401 2023-05-31 07:59:01,701 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35401 2023-05-31 07:59:01,701 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35401 2023-05-31 07:59:01,701 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35401 2023-05-31 07:59:01,702 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:01,710 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 07:59:01,711 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:01,719 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 07:59:01,719 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 07:59:01,719 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:01,720 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 07:59:01,721 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,33919,1685519941529 from backup master directory 2023-05-31 07:59:01,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 07:59:01,731 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:01,731 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 07:59:01,731 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:01,731 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 07:59:01,751 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/hbase.id with ID: ffd860f5-9b32-400f-aa09-564970ed0d28 2023-05-31 07:59:01,766 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:01,777 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:01,794 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x69cff0dd to 127.0.0.1:51691 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:01,807 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66c702b6, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:01,807 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:01,808 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 07:59:01,809 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:01,811 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store-tmp 2023-05-31 07:59:01,824 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 
07:59:01,824 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 07:59:01,824 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:01,824 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:01,824 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 07:59:01,824 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:01,824 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 07:59:01,825 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 07:59:01,825 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529
2023-05-31 07:59:01,829 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C33919%2C1685519941529, suffix=, logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529, archiveDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs, maxLogs=10
2023-05-31 07:59:01,839 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829
2023-05-31 07:59:01,839 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]]
2023-05-31 07:59:01,839 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-31 07:59:01,839 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:59:01,839 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,839 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,842 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,845 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-05-31 07:59:01,846 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-05-31 07:59:01,846 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:59:01,848 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,849 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,854 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-05-31 07:59:01,857 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 07:59:01,858 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=855376, jitterRate=0.08766692876815796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-05-31 07:59:01,858 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 07:59:01,858 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-05-31 07:59:01,860 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-05-31 07:59:01,860 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-05-31 07:59:01,860 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-05-31 07:59:01,861 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec
2023-05-31 07:59:01,862 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec
2023-05-31 07:59:01,862 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-05-31 07:59:01,864 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-05-31 07:59:01,865 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-05-31 07:59:01,875 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-05-31 07:59:01,875 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-05-31 07:59:01,876 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-05-31 07:59:01,876 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-05-31 07:59:01,876 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-05-31 07:59:01,885 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:59:01,886 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-05-31 07:59:01,887 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-05-31 07:59:01,888 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-05-31 07:59:01,898 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 07:59:01,898 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-05-31 07:59:01,898 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:59:01,899 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,33919,1685519941529, sessionid=0x100803f7dc40000, setting cluster-up flag (Was=false)
2023-05-31 07:59:01,919 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:59:01,944 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-05-31 07:59:01,945 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,33919,1685519941529
2023-05-31 07:59:01,964 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 07:59:01,989 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-05-31 07:59:01,990 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,33919,1685519941529
2023-05-31 07:59:01,991 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.hbase-snapshot/.tmp
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2
2023-05-31 07:59:01,994 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:01,995 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685519971995
2023-05-31 07:59:01,995 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:01,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-05-31 07:59:01,997 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-05-31 07:59:01,997 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-05-31 07:59:01,997 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519941997,5,FailOnTimeoutGroup]
2023-05-31 07:59:01,997 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519941997,5,FailOnTimeoutGroup]
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:01,997 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:01,998 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-05-31 07:59:02,003 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(951): ClusterId : ffd860f5-9b32-400f-aa09-564970ed0d28
2023-05-31 07:59:02,003 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-05-31 07:59:02,015 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-05-31 07:59:02,015 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-05-31 07:59:02,015 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 07:59:02,017 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-05-31 07:59:02,017 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff
2023-05-31 07:59:02,023 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-05-31 07:59:02,027 DEBUG [RS:0;jenkins-hbase16:35401] zookeeper.ReadOnlyZKClient(139): Connect 0x733e0b19 to 127.0.0.1:51691 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-05-31 07:59:02,035 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 07:59:02,037 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-05-31 07:59:02,038 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/info
2023-05-31 07:59:02,039 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-05-31 07:59:02,039 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:59:02,040 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-05-31 07:59:02,041 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/rep_barrier
2023-05-31 07:59:02,041 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-05-31 07:59:02,042 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:59:02,042 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-05-31 07:59:02,044 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/table
2023-05-31 07:59:02,044 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-05-31 07:59:02,045 DEBUG [RS:0;jenkins-hbase16:35401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7df9fb0e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null
2023-05-31 07:59:02,046 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-05-31 07:59:02,046 DEBUG [RS:0;jenkins-hbase16:35401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79c66069, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0
2023-05-31 07:59:02,048 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740
2023-05-31 07:59:02,048 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740
2023-05-31 07:59:02,051 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-05-31 07:59:02,052 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-05-31 07:59:02,054 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-05-31 07:59:02,055 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=869091, jitterRate=0.10510729253292084}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-05-31 07:59:02,055 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-05-31 07:59:02,055 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 07:59:02,055 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 07:59:02,055 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 07:59:02,055 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 07:59:02,055 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 07:59:02,055 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 07:59:02,056 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 07:59:02,057 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-05-31 07:59:02,057 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-05-31 07:59:02,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-05-31 07:59:02,059 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:35401
2023-05-31 07:59:02,059 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-05-31 07:59:02,059 INFO [RS:0;jenkins-hbase16:35401] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-05-31 07:59:02,059 INFO [RS:0;jenkins-hbase16:35401] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-05-31 07:59:02,060 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1022): About to register with Master.
2023-05-31 07:59:02,061 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-05-31 07:59:02,061 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,33919,1685519941529 with isa=jenkins-hbase16.apache.org/188.40.62.62:35401, startcode=1685519941680
2023-05-31 07:59:02,061 DEBUG [RS:0;jenkins-hbase16:35401] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-05-31 07:59:02,065 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:56311, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2023-05-31 07:59:02,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,35401,1685519941680
2023-05-31 07:59:02,067 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff
2023-05-31 07:59:02,067 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38437
2023-05-31 07:59:02,067 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1
2023-05-31 07:59:02,077 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-05-31 07:59:02,078 DEBUG [RS:0;jenkins-hbase16:35401] zookeeper.ZKUtil(162): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,35401,1685519941680
2023-05-31 07:59:02,078 WARN [RS:0;jenkins-hbase16:35401] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-05-31 07:59:02,078 INFO [RS:0;jenkins-hbase16:35401] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider
2023-05-31 07:59:02,078 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680
2023-05-31 07:59:02,079 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,35401,1685519941680]
2023-05-31 07:59:02,082 DEBUG [RS:0;jenkins-hbase16:35401] zookeeper.ZKUtil(162): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,35401,1685519941680
2023-05-31 07:59:02,083 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-05-31 07:59:02,084 INFO [RS:0;jenkins-hbase16:35401] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-05-31 07:59:02,086 INFO [RS:0;jenkins-hbase16:35401] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-05-31 07:59:02,087 INFO [RS:0;jenkins-hbase16:35401] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-05-31 07:59:02,087 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,087 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-05-31 07:59:02,088 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,089 DEBUG [RS:0;jenkins-hbase16:35401] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1
2023-05-31 07:59:02,090 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,090 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,090 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,099 INFO [RS:0;jenkins-hbase16:35401] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-05-31 07:59:02,099 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,35401,1685519941680-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-05-31 07:59:02,110 INFO [RS:0;jenkins-hbase16:35401] regionserver.Replication(203): jenkins-hbase16.apache.org,35401,1685519941680 started 2023-05-31 07:59:02,110 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,35401,1685519941680, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:35401, sessionid=0x100803f7dc40001 2023-05-31 07:59:02,110 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 07:59:02,110 DEBUG [RS:0;jenkins-hbase16:35401] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:02,110 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,35401,1685519941680' 2023-05-31 07:59:02,110 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,35401,1685519941680' 2023-05-31 07:59:02,111 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 07:59:02,112 DEBUG [RS:0;jenkins-hbase16:35401] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 07:59:02,112 DEBUG [RS:0;jenkins-hbase16:35401] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 07:59:02,112 INFO [RS:0;jenkins-hbase16:35401] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 07:59:02,112 INFO [RS:0;jenkins-hbase16:35401] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 07:59:02,211 DEBUG [jenkins-hbase16:33919] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 07:59:02,212 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,35401,1685519941680, state=OPENING 2023-05-31 07:59:02,215 INFO [RS:0;jenkins-hbase16:35401] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C35401%2C1685519941680, suffix=, logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680, archiveDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs, maxLogs=32 2023-05-31 07:59:02,223 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 07:59:02,231 INFO [RS:0;jenkins-hbase16:35401] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519942218 2023-05-31 07:59:02,231 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, 
quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:02,231 DEBUG [RS:0;jenkins-hbase16:35401] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]] 2023-05-31 07:59:02,231 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,35401,1685519941680}] 2023-05-31 07:59:02,232 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:59:02,386 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:02,387 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 07:59:02,390 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:49746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 07:59:02,396 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 07:59:02,396 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:02,400 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680, archiveDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs, maxLogs=32 2023-05-31 07:59:02,415 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519942403.meta 2023-05-31 07:59:02,415 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]] 2023-05-31 07:59:02,415 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:02,416 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 07:59:02,416 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 07:59:02,417 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 07:59:02,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 07:59:02,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:02,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 07:59:02,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 07:59:02,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 07:59:02,420 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/info 2023-05-31 07:59:02,420 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/info 2023-05-31 07:59:02,421 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 07:59:02,421 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:02,422 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 07:59:02,423 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:59:02,423 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:59:02,424 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 07:59:02,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:02,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 07:59:02,426 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/table 2023-05-31 07:59:02,426 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740/table 2023-05-31 07:59:02,428 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 07:59:02,429 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:02,430 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740 2023-05-31 07:59:02,432 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/meta/1588230740 2023-05-31 07:59:02,435 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 07:59:02,437 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 07:59:02,439 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=746877, jitterRate=-0.050297126173973083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 07:59:02,440 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 07:59:02,442 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685519942386 2023-05-31 07:59:02,446 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 07:59:02,447 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 07:59:02,447 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,35401,1685519941680, state=OPEN 2023-05-31 07:59:02,456 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 07:59:02,456 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:59:02,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 07:59:02,460 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,35401,1685519941680 in 225 msec 2023-05-31 07:59:02,464 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 07:59:02,464 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 403 msec 2023-05-31 07:59:02,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 473 msec 2023-05-31 07:59:02,468 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685519942468, completionTime=-1 2023-05-31 07:59:02,468 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 07:59:02,468 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 07:59:02,472 DEBUG [hconnection-0xf341c32-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 07:59:02,474 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:49748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 07:59:02,476 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 07:59:02,476 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685520002476 2023-05-31 07:59:02,476 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685520062476 2023-05-31 07:59:02,476 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-31 07:59:02,498 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33919,1685519941529-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:02,498 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33919,1685519941529-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:02,498 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33919,1685519941529-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:02,498 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:33919, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:02,498 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:02,499 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 07:59:02,499 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:02,500 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 07:59:02,500 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 07:59:02,503 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 07:59:02,504 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 07:59:02,507 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,507 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779 empty. 2023-05-31 07:59:02,508 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,508 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 07:59:02,525 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:02,527 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e8a8980874fa353f9ac8dd21156ef779, NAME => 'hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp 2023-05-31 07:59:02,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:02,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e8a8980874fa353f9ac8dd21156ef779, disabling compactions & flushes 2023-05-31 07:59:02,537 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. after waiting 0 ms 2023-05-31 07:59:02,537 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,538 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,538 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e8a8980874fa353f9ac8dd21156ef779: 2023-05-31 07:59:02,541 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 07:59:02,542 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519942542"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519942542"}]},"ts":"1685519942542"} 2023-05-31 07:59:02,545 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 07:59:02,546 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 07:59:02,546 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519942546"}]},"ts":"1685519942546"} 2023-05-31 07:59:02,548 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 07:59:02,586 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e8a8980874fa353f9ac8dd21156ef779, ASSIGN}] 2023-05-31 07:59:02,589 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e8a8980874fa353f9ac8dd21156ef779, ASSIGN 2023-05-31 07:59:02,590 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e8a8980874fa353f9ac8dd21156ef779, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,35401,1685519941680; forceNewPlan=false, retain=false 2023-05-31 07:59:02,742 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e8a8980874fa353f9ac8dd21156ef779, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:02,743 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519942742"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519942742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519942742"}]},"ts":"1685519942742"} 2023-05-31 07:59:02,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e8a8980874fa353f9ac8dd21156ef779, server=jenkins-hbase16.apache.org,35401,1685519941680}] 2023-05-31 07:59:02,912 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e8a8980874fa353f9ac8dd21156ef779, NAME => 'hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:02,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,913 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:02,913 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,913 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,915 INFO 
[StoreOpener-e8a8980874fa353f9ac8dd21156ef779-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,917 DEBUG [StoreOpener-e8a8980874fa353f9ac8dd21156ef779-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779/info 2023-05-31 07:59:02,917 DEBUG [StoreOpener-e8a8980874fa353f9ac8dd21156ef779-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779/info 2023-05-31 07:59:02,917 INFO [StoreOpener-e8a8980874fa353f9ac8dd21156ef779-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e8a8980874fa353f9ac8dd21156ef779 columnFamilyName info 2023-05-31 07:59:02,918 INFO [StoreOpener-e8a8980874fa353f9ac8dd21156ef779-1] regionserver.HStore(310): Store=e8a8980874fa353f9ac8dd21156ef779/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 07:59:02,920 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:02,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/hbase/namespace/e8a8980874fa353f9ac8dd21156ef779/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:02,930 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened e8a8980874fa353f9ac8dd21156ef779; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=861869, jitterRate=0.0959242731332779}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:59:02,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for e8a8980874fa353f9ac8dd21156ef779: 2023-05-31 07:59:02,931 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779., pid=6, masterSystemTime=1685519942904 2023-05-31 07:59:02,934 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,934 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:02,935 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e8a8980874fa353f9ac8dd21156ef779, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:02,935 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519942935"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519942935"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519942935"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519942935"}]},"ts":"1685519942935"} 2023-05-31 07:59:02,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 07:59:02,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e8a8980874fa353f9ac8dd21156ef779, server=jenkins-hbase16.apache.org,35401,1685519941680 in 187 msec 2023-05-31 07:59:02,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 07:59:02,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e8a8980874fa353f9ac8dd21156ef779, ASSIGN in 354 msec 2023-05-31 07:59:02,944 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 07:59:02,944 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519942944"}]},"ts":"1685519942944"} 2023-05-31 07:59:02,946 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 07:59:02,957 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 07:59:02,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 458 msec 2023-05-31 07:59:03,002 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 07:59:03,039 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:03,040 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:03,049 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 07:59:03,069 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, 
quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:03,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-05-31 07:59:03,091 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 07:59:03,110 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:03,126 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 32 msec 2023-05-31 07:59:03,152 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 07:59:03,169 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 07:59:03,169 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.438sec 2023-05-31 07:59:03,169 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 07:59:03,169 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 07:59:03,170 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 07:59:03,170 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33919,1685519941529-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 07:59:03,170 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33919,1685519941529-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 07:59:03,175 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 07:59:03,204 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ReadOnlyZKClient(139): Connect 0x71be2718 to 127.0.0.1:51691 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:03,220 DEBUG [Listener at localhost.localdomain/43413] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ae6ab01, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:03,222 DEBUG [hconnection-0x472b7622-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 07:59:03,224 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:49752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 07:59:03,226 INFO [Listener at localhost.localdomain/43413] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:03,227 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:03,244 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 07:59:03,244 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:03,246 INFO [Listener at localhost.localdomain/43413] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 07:59:03,260 INFO [Listener at localhost.localdomain/43413] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 07:59:03,260 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:03,261 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:03,261 INFO [Listener at localhost.localdomain/43413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 07:59:03,261 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 07:59:03,261 INFO [Listener at localhost.localdomain/43413] ipc.RpcExecutor(189): 
Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 07:59:03,261 INFO [Listener at localhost.localdomain/43413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 07:59:03,262 INFO [Listener at localhost.localdomain/43413] ipc.NettyRpcServer(120): Bind to /188.40.62.62:32895 2023-05-31 07:59:03,262 INFO [Listener at localhost.localdomain/43413] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 07:59:03,263 DEBUG [Listener at localhost.localdomain/43413] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 07:59:03,264 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:03,264 INFO [Listener at localhost.localdomain/43413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:03,265 INFO [Listener at localhost.localdomain/43413] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32895 connecting to ZooKeeper ensemble=127.0.0.1:51691 2023-05-31 07:59:03,277 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:328950x0, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 07:59:03,278 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(162): regionserver:328950x0, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 07:59:03,279 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): regionserver:32895-0x100803f7dc40005 connected 2023-05-31 07:59:03,280 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(162): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-31 07:59:03,281 DEBUG [Listener at localhost.localdomain/43413] zookeeper.ZKUtil(164): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 07:59:03,281 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32895 2023-05-31 07:59:03,282 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32895 2023-05-31 07:59:03,282 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32895 2023-05-31 07:59:03,286 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32895 2023-05-31 07:59:03,286 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32895 2023-05-31 07:59:03,289 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(951): ClusterId : ffd860f5-9b32-400f-aa09-564970ed0d28 2023-05-31 07:59:03,290 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 07:59:03,298 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 07:59:03,298 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(43): 
Procedure online-snapshot initializing 2023-05-31 07:59:03,307 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 07:59:03,308 DEBUG [RS:1;jenkins-hbase16:32895] zookeeper.ReadOnlyZKClient(139): Connect 0x4c8305c8 to 127.0.0.1:51691 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:03,320 DEBUG [RS:1;jenkins-hbase16:32895] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4244f228, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:03,320 DEBUG [RS:1;jenkins-hbase16:32895] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@467b6eb8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 07:59:03,330 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase16:32895 2023-05-31 07:59:03,331 INFO [RS:1;jenkins-hbase16:32895] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 07:59:03,331 INFO [RS:1;jenkins-hbase16:32895] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 07:59:03,331 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 07:59:03,332 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,33919,1685519941529 with isa=jenkins-hbase16.apache.org/188.40.62.62:32895, startcode=1685519943260 2023-05-31 07:59:03,332 DEBUG [RS:1;jenkins-hbase16:32895] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 07:59:03,335 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:43505, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 07:59:03,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,336 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff 2023-05-31 07:59:03,336 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38437 2023-05-31 07:59:03,336 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 07:59:03,344 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:03,344 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:03,344 DEBUG [RS:1;jenkins-hbase16:32895] zookeeper.ZKUtil(162): 
regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,344 WARN [RS:1;jenkins-hbase16:32895] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 07:59:03,344 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,32895,1685519943260] 2023-05-31 07:59:03,345 INFO [RS:1;jenkins-hbase16:32895] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:03,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,345 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:03,351 DEBUG [RS:1;jenkins-hbase16:32895] zookeeper.ZKUtil(162): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,351 DEBUG [RS:1;jenkins-hbase16:32895] zookeeper.ZKUtil(162): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:03,352 DEBUG [RS:1;jenkins-hbase16:32895] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 07:59:03,353 INFO [RS:1;jenkins-hbase16:32895] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 07:59:03,356 INFO [RS:1;jenkins-hbase16:32895] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 07:59:03,357 INFO [RS:1;jenkins-hbase16:32895] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 07:59:03,357 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:03,357 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 07:59:03,359 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,359 DEBUG [RS:1;jenkins-hbase16:32895] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, 
corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:03,360 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:03,361 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:03,361 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:03,370 INFO [RS:1;jenkins-hbase16:32895] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 07:59:03,371 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,32895,1685519943260-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:03,380 INFO [RS:1;jenkins-hbase16:32895] regionserver.Replication(203): jenkins-hbase16.apache.org,32895,1685519943260 started 2023-05-31 07:59:03,380 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,32895,1685519943260, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:32895, sessionid=0x100803f7dc40005 2023-05-31 07:59:03,380 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 07:59:03,380 INFO [Listener at localhost.localdomain/43413] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase16:32895,5,FailOnTimeoutGroup] 2023-05-31 07:59:03,380 DEBUG [RS:1;jenkins-hbase16:32895] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,381 INFO [Listener at localhost.localdomain/43413] wal.TestLogRolling(323): Replication=2 2023-05-31 07:59:03,381 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(357): 
Starting procedure member 'jenkins-hbase16.apache.org,32895,1685519943260' 2023-05-31 07:59:03,381 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 07:59:03,382 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 07:59:03,382 DEBUG [Listener at localhost.localdomain/43413] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 07:59:03,383 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 07:59:03,383 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 07:59:03,383 DEBUG [RS:1;jenkins-hbase16:32895] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,383 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,32895,1685519943260' 2023-05-31 07:59:03,383 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-31 07:59:03,384 DEBUG [RS:1;jenkins-hbase16:32895] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 07:59:03,384 DEBUG [RS:1;jenkins-hbase16:32895] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 07:59:03,385 INFO [RS:1;jenkins-hbase16:32895] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 07:59:03,385 INFO [RS:1;jenkins-hbase16:32895] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-31 07:59:03,386 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:57416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 07:59:03,387 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 07:59:03,388 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-31 07:59:03,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:03,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-31 07:59:03,391 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 07:59:03,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.MasterRpcServices(697): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 
2023-05-31 07:59:03,392 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 07:59:03,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 07:59:03,394 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,395 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56 empty. 2023-05-31 07:59:03,395 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,395 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-31 07:59:03,411 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:03,413 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0a38e21359722b2f9c82783be0a70a56, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/.tmp 2023-05-31 07:59:03,422 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:03,423 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 0a38e21359722b2f9c82783be0a70a56, disabling compactions & flushes 2023-05-31 07:59:03,423 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:03,423 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:03,423 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 
after waiting 0 ms 2023-05-31 07:59:03,423 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:03,423 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:03,423 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 0a38e21359722b2f9c82783be0a70a56: 2023-05-31 07:59:03,427 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 07:59:03,430 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685519943429"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519943429"}]},"ts":"1685519943429"} 2023-05-31 07:59:03,432 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 07:59:03,433 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 07:59:03,434 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519943433"}]},"ts":"1685519943433"} 2023-05-31 07:59:03,436 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-31 07:59:03,463 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase16.apache.org=0} racks are {/default-rack=0} 2023-05-31 07:59:03,466 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-31 07:59:03,466 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-31 07:59:03,466 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-31 07:59:03,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a38e21359722b2f9c82783be0a70a56, ASSIGN}] 2023-05-31 07:59:03,470 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a38e21359722b2f9c82783be0a70a56, ASSIGN 2023-05-31 07:59:03,471 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a38e21359722b2f9c82783be0a70a56, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,32895,1685519943260; forceNewPlan=false, retain=false 2023-05-31 07:59:03,489 INFO [RS:1;jenkins-hbase16:32895] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C32895%2C1685519943260, suffix=, logDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260, archiveDir=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs, maxLogs=32 2023-05-31 07:59:03,507 INFO [RS:1;jenkins-hbase16:32895] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 2023-05-31 07:59:03,508 DEBUG [RS:1;jenkins-hbase16:32895] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] 2023-05-31 07:59:03,628 INFO [jenkins-hbase16:33919] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-05-31 07:59:03,630 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0a38e21359722b2f9c82783be0a70a56, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,630 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685519943630"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519943630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519943630"}]},"ts":"1685519943630"} 2023-05-31 07:59:03,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 0a38e21359722b2f9c82783be0a70a56, server=jenkins-hbase16.apache.org,32895,1685519943260}] 2023-05-31 07:59:03,788 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,789 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 07:59:03,796 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51132, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 07:59:03,805 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 
2023-05-31 07:59:03,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a38e21359722b2f9c82783be0a70a56, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:03,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:03,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,806 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,808 INFO [StoreOpener-0a38e21359722b2f9c82783be0a70a56-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,809 DEBUG [StoreOpener-0a38e21359722b2f9c82783be0a70a56-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info 2023-05-31 07:59:03,810 DEBUG [StoreOpener-0a38e21359722b2f9c82783be0a70a56-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info 2023-05-31 07:59:03,810 INFO [StoreOpener-0a38e21359722b2f9c82783be0a70a56-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a38e21359722b2f9c82783be0a70a56 columnFamilyName info 2023-05-31 07:59:03,811 INFO [StoreOpener-0a38e21359722b2f9c82783be0a70a56-1] regionserver.HStore(310): Store=0a38e21359722b2f9c82783be0a70a56/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:03,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56 2023-05-31 
07:59:03,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:03,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:03,824 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 0a38e21359722b2f9c82783be0a70a56; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=837524, jitterRate=0.06496721506118774}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:59:03,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 0a38e21359722b2f9c82783be0a70a56: 2023-05-31 07:59:03,826 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56., pid=11, masterSystemTime=1685519943788 2023-05-31 07:59:03,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:03,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 
2023-05-31 07:59:03,833 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=0a38e21359722b2f9c82783be0a70a56, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:03,833 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685519943833"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519943833"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519943833"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519943833"}]},"ts":"1685519943833"} 2023-05-31 07:59:03,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 07:59:03,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 0a38e21359722b2f9c82783be0a70a56, server=jenkins-hbase16.apache.org,32895,1685519943260 in 203 msec 2023-05-31 07:59:03,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 07:59:03,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=0a38e21359722b2f9c82783be0a70a56, ASSIGN in 372 msec 2023-05-31 07:59:03,843 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 07:59:03,843 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519943843"}]},"ts":"1685519943843"} 2023-05-31 07:59:03,845 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-31 07:59:03,857 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 07:59:03,860 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 469 msec 2023-05-31 07:59:04,825 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 07:59:08,084 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 07:59:08,084 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 07:59:09,353 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-31 07:59:13,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 07:59:13,397 INFO [Listener at localhost.localdomain/43413] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-31 07:59:13,403 DEBUG [Listener at localhost.localdomain/43413] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 
2023-05-31 07:59:13,403 DEBUG [Listener at localhost.localdomain/43413] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:13,420 WARN [Listener at localhost.localdomain/43413] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 07:59:13,425 WARN [Listener at localhost.localdomain/43413] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 07:59:13,427 INFO [Listener at localhost.localdomain/43413] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:13,432 INFO [Listener at localhost.localdomain/43413] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_42279_datanode____ey767x/webapp 2023-05-31 07:59:13,511 INFO [Listener at localhost.localdomain/43413] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42279 2023-05-31 07:59:13,521 WARN [Listener at localhost.localdomain/40615] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:13,591 WARN [Listener at localhost.localdomain/40615] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 07:59:13,594 WARN [Listener at localhost.localdomain/40615] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 07:59:13,596 INFO [Listener at localhost.localdomain/40615] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:13,601 INFO [Listener at localhost.localdomain/40615] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_41303_datanode____.63fcn4/webapp 2023-05-31 07:59:13,673 INFO [Listener at localhost.localdomain/40615] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41303 2023-05-31 07:59:13,682 WARN [Listener at localhost.localdomain/39135] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:13,694 WARN [Listener at localhost.localdomain/39135] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 07:59:13,696 WARN [Listener at localhost.localdomain/39135] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 07:59:13,698 INFO [Listener at localhost.localdomain/39135] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 07:59:13,701 INFO [Listener at localhost.localdomain/39135] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_44935_datanode____cckite/webapp 2023-05-31 07:59:13,776 INFO [Listener at localhost.localdomain/39135] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44935 2023-05-31 07:59:13,786 WARN [Listener at localhost.localdomain/36389] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 07:59:14,411 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x4740d3a6b761d0f2: Processing first storage report for DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d from datanode f05f79a8-d47c-45c0-a369-3e21977298bf 2023-05-31 07:59:14,411 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4740d3a6b761d0f2: from storage DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d node DatanodeRegistration(127.0.0.1:42741, datanodeUuid=f05f79a8-d47c-45c0-a369-3e21977298bf, infoPort=44445, infoSecurePort=0, ipcPort=40615, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,411 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4740d3a6b761d0f2: Processing first storage report for DS-e8354ae4-f3aa-456f-998d-1b3f15992bf2 from datanode f05f79a8-d47c-45c0-a369-3e21977298bf 2023-05-31 07:59:14,411 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4740d3a6b761d0f2: from storage DS-e8354ae4-f3aa-456f-998d-1b3f15992bf2 node DatanodeRegistration(127.0.0.1:42741, datanodeUuid=f05f79a8-d47c-45c0-a369-3e21977298bf, infoPort=44445, infoSecurePort=0, ipcPort=40615, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,576 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x199694c23e76ca9e: Processing first storage report for DS-c25ab37c-1550-4f98-a2de-73dc094267bb from datanode 89651c44-e03a-4d19-a740-be7cc6df83a2 2023-05-31 07:59:14,577 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x199694c23e76ca9e: from storage DS-c25ab37c-1550-4f98-a2de-73dc094267bb node DatanodeRegistration(127.0.0.1:43601, datanodeUuid=89651c44-e03a-4d19-a740-be7cc6df83a2, infoPort=46259, infoSecurePort=0, 
ipcPort=39135, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,577 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x199694c23e76ca9e: Processing first storage report for DS-4dc13e83-bcb4-480e-abaa-0124ec865888 from datanode 89651c44-e03a-4d19-a740-be7cc6df83a2 2023-05-31 07:59:14,577 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x199694c23e76ca9e: from storage DS-4dc13e83-bcb4-480e-abaa-0124ec865888 node DatanodeRegistration(127.0.0.1:43601, datanodeUuid=89651c44-e03a-4d19-a740-be7cc6df83a2, infoPort=46259, infoSecurePort=0, ipcPort=39135, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,664 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdf3093f42f83297f: Processing first storage report for DS-69632975-7602-4370-8711-4365abf9392e from datanode 85d37d2d-f6dd-4f1a-9dfd-e74735a2e3a2 2023-05-31 07:59:14,664 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdf3093f42f83297f: from storage DS-69632975-7602-4370-8711-4365abf9392e node DatanodeRegistration(127.0.0.1:40485, datanodeUuid=85d37d2d-f6dd-4f1a-9dfd-e74735a2e3a2, infoPort=44991, infoSecurePort=0, ipcPort=36389, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,664 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdf3093f42f83297f: Processing first storage report for DS-7123202b-7c74-4d1e-9844-f5e17c7f3bca from datanode 85d37d2d-f6dd-4f1a-9dfd-e74735a2e3a2 2023-05-31 07:59:14,664 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xdf3093f42f83297f: from storage DS-7123202b-7c74-4d1e-9844-f5e17c7f3bca node DatanodeRegistration(127.0.0.1:40485, datanodeUuid=85d37d2d-f6dd-4f1a-9dfd-e74735a2e3a2, infoPort=44991, infoSecurePort=0, ipcPort=36389, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 07:59:14,704 WARN [Listener at localhost.localdomain/36389] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 07:59:14,706 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 07:59:14,708 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-31 07:59:14,707 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009 from datanode 
DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 07:59:14,706 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 07:59:14,708 WARN [DataStreamer for file /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519942218 block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]) is bad.
2023-05-31 07:59:14,708 WARN [DataStreamer for file /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519942403.meta block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK], DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]) is bad.
2023-05-31 07:59:14,708 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33623]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,708 WARN [DataStreamer for file /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829 block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]) is bad.
2023-05-31 07:59:14,708 WARN [DataStreamer for file /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]) is bad.
2023-05-31 07:59:14,708 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33623]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,715 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:32866 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32866 dst: /127.0.0.1:40483
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,715 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:32870 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32870 dst: /127.0.0.1:40483
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,715 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 07:59:14,719 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:32914 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32914 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40483 remote=/127.0.0.1:32914]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,719 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:32844 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32844 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40483 remote=/127.0.0.1:32844]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,719 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40483]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,719 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40483]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,722 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:42626 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:33623:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42626 dst: /127.0.0.1:33623
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,723 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:42548 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33623:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42548 dst: /127.0.0.1:33623
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:42578 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33623:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42578 dst: /127.0.0.1:33623
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,821 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:42570 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33623:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:42570 dst: /127.0.0.1:33623
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,821 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 07:59:14,822 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 6277fbff-9b25-4167-9ac9-092927692ba5) service to localhost.localdomain/127.0.0.1:38437
2023-05-31 07:59:14,823 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data3/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:14,823 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data4/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:14,825 WARN [Listener at localhost.localdomain/36389] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 07:59:14,825 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1017
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 07:59:14,826 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 07:59:14,825 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1018
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 07:59:14,825 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 07:59:14,833 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 07:59:14,939 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:36928 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36928 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,940 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:36922 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36922 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,940 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:36924 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36924 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,940 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:36952 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:40483:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:36952 dst: /127.0.0.1:40483
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:14,943 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 07:59:14,943 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 4c1d5836-20c1-40a9-9c26-ab5431304c81) service to localhost.localdomain/127.0.0.1:38437
2023-05-31 07:59:14,944 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data1/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:14,945 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data2/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:14,950 DEBUG [Listener at localhost.localdomain/36389] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-05-31 07:59:14,952 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:35210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-05-31 07:59:14,953 WARN [RS:1;jenkins-hbase16:32895.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:14,954 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C32895%2C1685519943260:(num 1685519943491) roll requested
2023-05-31 07:59:14,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32895] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:14,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32895] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:35210 deadline: 1685519964952, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-31 07:59:14,965 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-31 07:59:14,965 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 with entries=1, filesize=467 B; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 2023-05-31 07:59:14,966 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK], DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]] 2023-05-31 07:59:14,966 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 is not closed yet, will try archiving it next time 2023-05-31 07:59:14,966 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:14,967 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:14,967 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519943491 2023-05-31 07:59:27,007 INFO [Listener at localhost.localdomain/36389] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 2023-05-31 07:59:27,007 WARN [Listener at localhost.localdomain/36389] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 07:59:27,008 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 07:59:27,008 WARN [DataStreamer for file 
/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 block BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK], DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]) is bad. 2023-05-31 07:59:27,012 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 07:59:27,017 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:57012 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:42741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57012 dst: /127.0.0.1:42741 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:42741 remote=/127.0.0.1:57012]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:27,018 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:42741]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:27,019 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:60894 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:40485:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60894 dst: /127.0.0.1:40485 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:27,120 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 07:59:27,120 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 85d37d2d-f6dd-4f1a-9dfd-e74735a2e3a2) service to localhost.localdomain/127.0.0.1:38437 2023-05-31 07:59:27,121 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data9/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:59:27,121 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data10/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:59:27,129 WARN [sync.3] wal.FSHLog(747): HDFS pipeline error 
detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]] 2023-05-31 07:59:27,129 WARN [sync.3] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]] 2023-05-31 07:59:27,130 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C32895%2C1685519943260:(num 1685519954954) roll requested 2023-05-31 07:59:27,135 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741840_1021 2023-05-31 07:59:27,139 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK] 2023-05-31 07:59:27,144 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741841_1022 2023-05-31 07:59:27,145 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:27,149 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741842_1023 2023-05-31 07:59:27,149 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK] 2023-05-31 07:59:27,155 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 with entries=2, filesize=2.36 KB; new WAL 
/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519967130 2023-05-31 07:59:27,155 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:27,155 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 is not closed yet, will try archiving it next time 2023-05-31 07:59:31,140 WARN [Listener at localhost.localdomain/36389] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 07:59:31,143 WARN [ResponseProcessor for block BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 07:59:31,145 WARN [DataStreamer for file /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519967130 block BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024] hdfs.DataStreamer(1548): Error Recovery for BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024 in pipeline 
[DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]) is bad. 2023-05-31 07:59:31,152 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:40378 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40378 dst: /127.0.0.1:43601 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43601 remote=/127.0.0.1:40378]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:31,153 WARN [PacketResponder: BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:43601]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:31,153 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 07:59:31,154 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:58918 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741843_1024]] datanode.DataXceiver(323): 127.0.0.1:42741:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58918 dst: /127.0.0.1:42741 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:31,265 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 07:59:31,266 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 
f05f79a8-d47c-45c0-a369-3e21977298bf) service to localhost.localdomain/127.0.0.1:38437 2023-05-31 07:59:31,266 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data5/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:59:31,266 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data6/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 07:59:31,271 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:31,271 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:31,271 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C32895%2C1685519943260:(num 1685519967130) roll requested 2023-05-31 07:59:31,274 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741844_1026 2023-05-31 07:59:31,275 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:31,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32895] regionserver.HRegion(9158): Flush requested on 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:31,277 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0a38e21359722b2f9c82783be0a70a56 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 07:59:31,277 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741845_1027 2023-05-31 07:59:31,278 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK] 2023-05-31 07:59:31,281 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38228 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741846_1028 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,281 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741846_1028
2023-05-31 07:59:31,282 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:31,284 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38228 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38228 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,286 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38232 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741847_1029 to mirror 127.0.0.1:40483: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,286 WARN [Thread-651] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741847_1029
2023-05-31 07:59:31,286 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38232 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38232 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,286 WARN [Thread-651] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]
2023-05-31 07:59:31,287 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741848_1030
2023-05-31 07:59:31,287 WARN [IPC Server handler 4 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-05-31 07:59:31,287 WARN [IPC Server handler 4 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-05-31 07:59:31,288 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:31,288 WARN [IPC Server handler 4 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:31,290 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38260 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741850_1032]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741850_1032 to mirror 127.0.0.1:33623: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,291 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741850_1032
2023-05-31 07:59:31,291 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38260 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741850_1032]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38260 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,291 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]
2023-05-31 07:59:31,292 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519967130 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971271
2023-05-31 07:59:31,292 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]
2023-05-31 07:59:31,293 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519967130 is not closed yet, will try archiving it next time
2023-05-31 07:59:31,295 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38272 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741851_1033 to mirror 127.0.0.1:40483: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,296 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741851_1033
2023-05-31 07:59:31,296 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38272 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38272 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,296 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]
2023-05-31 07:59:31,298 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38288 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741852_1034]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741852_1034 to mirror 127.0.0.1:40485: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,298 WARN [Thread-653] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741852_1034
2023-05-31 07:59:31,298 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38288 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741852_1034]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38288 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,299 WARN [Thread-653] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]
2023-05-31 07:59:31,300 WARN [IPC Server handler 4 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-05-31 07:59:31,300 WARN [IPC Server handler 4 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-05-31 07:59:31,300 WARN [IPC Server handler 4 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:31,495 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]
2023-05-31 07:59:31,496 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]
2023-05-31 07:59:31,496 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C32895%2C1685519943260:(num 1685519971271) roll requested
2023-05-31 07:59:31,500 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741854_1036
2023-05-31 07:59:31,502 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]
2023-05-31 07:59:31,504 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741855_1037
2023-05-31 07:59:31,505 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]
2023-05-31 07:59:31,508 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741856_1038
2023-05-31 07:59:31,509 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]
2023-05-31 07:59:31,512 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38308 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741857_1039]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741857_1039 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused
2023-05-31 07:59:31,512 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741857_1039
2023-05-31 07:59:31,512 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:38308 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741857_1039]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38308 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:31,512 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:31,513 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-05-31 07:59:31,513 WARN [IPC Server handler 3 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-05-31 07:59:31,513 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:31,517 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971271 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971496
2023-05-31 07:59:31,518 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]
2023-05-31 07:59:31,518 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519967130 is not closed yet, will try archiving it next time
2023-05-31 07:59:31,518 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971271 is not closed yet, will try archiving it next time
2023-05-31 07:59:31,698 DEBUG [Close-WAL-Writer-0] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971271 is not closed yet, will try archiving it next time
2023-05-31 07:59:31,702 WARN [sync.1] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas.
2023-05-31 07:59:31,705 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/37d4bf80db8241ca80cf7237e2d7b76d
2023-05-31 07:59:31,716 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/37d4bf80db8241ca80cf7237e2d7b76d as hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/37d4bf80db8241ca80cf7237e2d7b76d
2023-05-31 07:59:31,722 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/37d4bf80db8241ca80cf7237e2d7b76d, entries=5, sequenceid=12, filesize=10.0 K
2023-05-31 07:59:31,723 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 0a38e21359722b2f9c82783be0a70a56 in 446ms, sequenceid=12, compaction requested=false
2023-05-31 07:59:31,724 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0a38e21359722b2f9c82783be0a70a56:
2023-05-31 07:59:31,912 WARN [Listener at localhost.localdomain/36389] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 07:59:31,916 WARN [Listener at localhost.localdomain/36389] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:59:31,918 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:59:31,921 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519954954
2023-05-31 07:59:31,924 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/java.io.tmpdir/Jetty_localhost_44381_datanode____.hkskkt/webapp
2023-05-31 07:59:31,994 INFO [Listener at localhost.localdomain/36389] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44381
2023-05-31 07:59:31,996 WARN [master/jenkins-hbase16:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:31,997 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33919%2C1685519941529:(num 1685519941829) roll requested
2023-05-31 07:59:32,003 WARN [Thread-683] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741859_1041
2023-05-31 07:59:32,003 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:32,003 WARN [Thread-683] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33623,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]
2023-05-31 07:59:32,004 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423)
    at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101)
    at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68)
Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:32,005 WARN [Listener at localhost.localdomain/38935] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:59:32,008 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:38338 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741860_1042 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused
2023-05-31 07:59:32,008 WARN [Thread-683] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741860_1042
2023-05-31 07:59:32,009 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:38338 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38338 dst: /127.0.0.1:43601
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:32,009 WARN [Thread-683] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:32,010 WARN [Thread-683] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741861_1043
2023-05-31 07:59:32,011 WARN [Thread-683] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]
2023-05-31 07:59:32,012 WARN [Thread-683] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741862_1044
2023-05-31 07:59:32,012 WARN [Thread-683] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]
2023-05-31 07:59:32,013 WARN [IPC Server handler 1 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-05-31 07:59:32,013 WARN [IPC Server handler 1 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-05-31 07:59:32,013 WARN [IPC Server handler 1 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:32,016 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL
2023-05-31 07:59:32,016 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829 with entries=88, filesize=43.74 KB; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997
2023-05-31 07:59:32,016 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]]
2023-05-31 07:59:32,017 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829 is not closed yet, will try archiving it next time
2023-05-31 07:59:32,016 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:32,017 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 07:59:32,307 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff1d947d12d58b51: Processing first storage report for DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31 from datanode 6277fbff-9b25-4167-9ac9-092927692ba5
2023-05-31 07:59:32,308 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff1d947d12d58b51: from storage DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31 node DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-05-31 07:59:32,308 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xff1d947d12d58b51: Processing first storage report for DS-ce4f9d28-cfd4-4740-b4c4-f7451b9e922b from datanode 6277fbff-9b25-4167-9ac9-092927692ba5
2023-05-31 07:59:32,308 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xff1d947d12d58b51: from storage DS-ce4f9d28-cfd4-4740-b4c4-f7451b9e922b node DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:59:35,584 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@687fd6b8] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:43601, datanodeUuid=89651c44-e03a-4d19-a740-be7cc6df83a2, infoPort=46259, infoSecurePort=0, ipcPort=39135, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741849_1031 to 127.0.0.1:40485 got
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:43,309 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@770f99c8] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741835_1011 to 127.0.0.1:40485 got
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:44,311 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@7e4b9b2e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789,
infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741831_1007 to 127.0.0.1:40485 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:44,312 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1856fcab] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741827_1003 to 127.0.0.1:42741 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:46,311 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6fad460e] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, 
storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741828_1004 to 127.0.0.1:40485 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:46,312 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@646743ea] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741826_1002 to 127.0.0.1:40485 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:49,312 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@465726a3] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, 
storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741825_1001 to 127.0.0.1:40485 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:49,312 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@694b3966] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741836_1012 to 127.0.0.1:42741 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:50,311 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@6c538f77] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, 
storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741834_1010 to 127.0.0.1:42741 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:50,312 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@32f5e904] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:40499, datanodeUuid=6277fbff-9b25-4167-9ac9-092927692ba5, infoPort=37789, infoSecurePort=0, ipcPort=38935, storageInfo=lv=-57;cid=testClusterID;nsid=1556669837;c=1685519940089):Failed to transfer BP-935495920-188.40.62.62-1685519940089:blk_1073741830_1006 to 127.0.0.1:42741 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:50,487 INFO [Listener at localhost.localdomain/38935] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971496 with entries=2, filesize=1.57 KB; new WAL 
/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519990463 2023-05-31 07:59:50,487 DEBUG [Listener at localhost.localdomain/38935] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:50,487 DEBUG [Listener at localhost.localdomain/38935] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260/jenkins-hbase16.apache.org%2C32895%2C1685519943260.1685519971496 is not closed yet, will try archiving it next time 2023-05-31 07:59:50,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32895] regionserver.HRegion(9158): Flush requested on 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:50,492 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 0a38e21359722b2f9c82783be0a70a56 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-31 07:59:50,493 INFO [sync.0] wal.FSHLog(774): LowReplication-Roller was enabled. 
2023-05-31 07:59:50,507 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/57a3a033196b487a9de3385542c61c5a 2023-05-31 07:59:50,510 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 07:59:50,511 INFO [Listener at localhost.localdomain/38935] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 07:59:50,511 DEBUG [Listener at localhost.localdomain/38935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71be2718 to 127.0.0.1:51691 2023-05-31 07:59:50,511 DEBUG [Listener at localhost.localdomain/38935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,511 DEBUG [Listener at localhost.localdomain/38935] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 07:59:50,511 DEBUG [Listener at localhost.localdomain/38935] util.JVMClusterUtil(257): Found active master hash=655931458, stopped=false 2023-05-31 07:59:50,511 INFO [Listener at localhost.localdomain/38935] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:50,517 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/57a3a033196b487a9de3385542c61c5a as hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/57a3a033196b487a9de3385542c61c5a 2023-05-31 07:59:50,520 DEBUG [Listener at 
localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 07:59:50,520 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 07:59:50,520 INFO [Listener at localhost.localdomain/38935] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 07:59:50,520 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 07:59:50,521 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:50,521 DEBUG [Listener at localhost.localdomain/38935] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x69cff0dd to 127.0.0.1:51691 2023-05-31 07:59:50,521 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:50,521 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:50,521 DEBUG [Listener at localhost.localdomain/38935] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,522 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, 
quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:50,522 INFO [Listener at localhost.localdomain/38935] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,35401,1685519941680' ***** 2023-05-31 07:59:50,522 INFO [Listener at localhost.localdomain/38935] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 07:59:50,522 INFO [Listener at localhost.localdomain/38935] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,32895,1685519943260' ***** 2023-05-31 07:59:50,522 INFO [Listener at localhost.localdomain/38935] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 07:59:50,522 INFO [RS:0;jenkins-hbase16:35401] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 07:59:50,522 INFO [RS:1;jenkins-hbase16:32895] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 07:59:50,522 INFO [RS:0;jenkins-hbase16:35401] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 07:59:50,522 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 07:59:50,522 INFO [RS:0;jenkins-hbase16:35401] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 07:59:50,523 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(3303): Received CLOSE for e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:50,523 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:50,523 DEBUG [RS:0;jenkins-hbase16:35401] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x733e0b19 to 127.0.0.1:51691 2023-05-31 07:59:50,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing e8a8980874fa353f9ac8dd21156ef779, disabling compactions & flushes 2023-05-31 07:59:50,523 DEBUG [RS:0;jenkins-hbase16:35401] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:50,524 INFO [RS:0;jenkins-hbase16:35401] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 07:59:50,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:50,524 INFO [RS:0;jenkins-hbase16:35401] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 07:59:50,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. after waiting 0 ms 2023-05-31 07:59:50,524 INFO [RS:0;jenkins-hbase16:35401] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 07:59:50,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 
2023-05-31 07:59:50,524 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 07:59:50,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing e8a8980874fa353f9ac8dd21156ef779 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 07:59:50,524 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-31 07:59:50,524 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, e8a8980874fa353f9ac8dd21156ef779=hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779.} 2023-05-31 07:59:50,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 07:59:50,525 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1504): Waiting on 1588230740, e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:50,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 07:59:50,525 WARN [RS:0;jenkins-hbase16:35401.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 07:59:50,525 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C35401%2C1685519941680:(num 1685519942218) roll requested 2023-05-31 07:59:50,525 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/57a3a033196b487a9de3385542c61c5a, entries=8, sequenceid=25, filesize=13.2 K 2023-05-31 07:59:50,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for e8a8980874fa353f9ac8dd21156ef779: 2023-05-31 07:59:50,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 07:59:50,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 07:59:50,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-05-31 07:59:50,526 WARN [RS_OPEN_META-regionserver/jenkins-hbase16:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,526 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase16.apache.org,35401,1685519941680: Unrecoverable exception while closing hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,526 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 07:59:50,528 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-31 07:59:50,528 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 07:59:50,528 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 0a38e21359722b2f9c82783be0a70a56 in 36ms, sequenceid=25, compaction requested=false 2023-05-31 07:59:50,528 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 0a38e21359722b2f9c82783be0a70a56: 2023-05-31 07:59:50,529 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-31 07:59:50,529 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 07:59:50,529 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/57a3a033196b487a9de3385542c61c5a because midkey is the same as first or last row 2023-05-31 
07:59:50,529 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 07:59:50,529 INFO [RS:1;jenkins-hbase16:32895] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 07:59:50,529 INFO [RS:1;jenkins-hbase16:32895] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-31 07:59:50,529 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(3303): Received CLOSE for 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:50,530 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:50,530 DEBUG [RS:1;jenkins-hbase16:32895] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c8305c8 to 127.0.0.1:51691 2023-05-31 07:59:50,530 DEBUG [RS:1;jenkins-hbase16:32895] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 0a38e21359722b2f9c82783be0a70a56, disabling compactions & flushes 2023-05-31 07:59:50,530 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-05-31 07:59:50,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:50,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 
2023-05-31 07:59:50,531 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1478): Online Regions={0a38e21359722b2f9c82783be0a70a56=TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56.} 2023-05-31 07:59:50,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. after waiting 0 ms 2023-05-31 07:59:50,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:50,532 DEBUG [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1504): Waiting on 0a38e21359722b2f9c82783be0a70a56 2023-05-31 07:59:50,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 0a38e21359722b2f9c82783be0a70a56 1/1 column families, dataSize=9.46 KB heapSize=10.38 KB 2023-05-31 07:59:50,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-31 07:59:50,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-31 07:59:50,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-31 07:59:50,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-31 07:59:50,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { 
"committed": 1009254400, "init": 524288000, "max": 2051014656, "used": 347969992 }, "NonHeapMemoryUsage": { "committed": 134021120, "init": 2555904, "max": -1, "used": 131351576 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-31 07:59:50,537 WARN [Thread-741] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741866_1048 2023-05-31 07:59:50,538 WARN [Thread-741] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:50,540 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33919] master.MasterRpcServices(609): jenkins-hbase16.apache.org,35401,1685519941680 reported a fatal error: ***** ABORTING region server jenkins-hbase16.apache.org,35401,1685519941680: Unrecoverable exception while closing hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,541 WARN [Thread-742] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741868_1050 2023-05-31 07:59:50,542 WARN [Thread-742] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:50,545 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:51542 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741869_1051]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current]'}, localName='127.0.0.1:43601', datanodeUuid='89651c44-e03a-4d19-a740-be7cc6df83a2', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741869_1051 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused 2023-05-31 07:59:50,545 WARN [Thread-742] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741869_1051 2023-05-31 07:59:50,546 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1837158181_17 at /127.0.0.1:51542 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741869_1051]] datanode.DataXceiver(323): 127.0.0.1:43601:DataXceiver error processing WRITE_BLOCK operation src: 
/127.0.0.1:51542 dst: /127.0.0.1:43601 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:50,547 WARN [Thread-742] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK] 2023-05-31 07:59:50,555 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-31 07:59:50,556 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519942218 with entries=3, filesize=601 B; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519990525 2023-05-31 07:59:50,556 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:50,556 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,556 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519942218 is not closed yet, will try archiving it next time 2023-05-31 07:59:50,557 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.1685519942218; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,557 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta:.meta(num 1685519942403) roll requested 2023-05-31 07:59:50,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=9.46 KB at sequenceid=37 (bloomFilter=true), to=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/70d00434e1d649798a81d503159f73bc 2023-05-31 07:59:50,562 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:59236 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741871_1053]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data4/current]'}, localName='127.0.0.1:40499', datanodeUuid='6277fbff-9b25-4167-9ac9-092927692ba5', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741871_1053 to mirror 127.0.0.1:40485: java.net.ConnectException: Connection refused 2023-05-31 07:59:50,562 WARN [Thread-755] hdfs.DataStreamer(1658): Abandoning 
BP-935495920-188.40.62.62-1685519940089:blk_1073741871_1053 2023-05-31 07:59:50,563 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_474598611_17 at /127.0.0.1:59236 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741871_1053]] datanode.DataXceiver(323): 127.0.0.1:40499:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59236 dst: /127.0.0.1:40499 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 07:59:50,563 WARN [Thread-755] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:50,564 WARN [Thread-755] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741872_1054 2023-05-31 07:59:50,564 WARN [Thread-755] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK] 2023-05-31 07:59:50,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/.tmp/info/70d00434e1d649798a81d503159f73bc as 
hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/70d00434e1d649798a81d503159f73bc 2023-05-31 07:59:50,571 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-31 07:59:50,571 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519942403.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519990557.meta 2023-05-31 07:59:50,572 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK], DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]] 2023-05-31 07:59:50,572 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519942403.meta is not closed yet, will try archiving it next time 2023-05-31 07:59:50,572 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,572 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680/jenkins-hbase16.apache.org%2C35401%2C1685519941680.meta.1685519942403.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40483,DS-d5740835-14eb-4a13-8d16-743bded2b924,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 07:59:50,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/info/70d00434e1d649798a81d503159f73bc, entries=9, sequenceid=37, filesize=14.2 K 2023-05-31 07:59:50,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~9.46 KB/9684, heapSize ~10.36 KB/10608, currentSize=0 B/0 for 0a38e21359722b2f9c82783be0a70a56 in 46ms, sequenceid=37, compaction requested=true 2023-05-31 07:59:50,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/data/default/TestLogRolling-testLogRollOnDatanodeDeath/0a38e21359722b2f9c82783be0a70a56/recovered.edits/40.seqid, newMaxSeqId=40, maxSeqId=1 2023-05-31 07:59:50,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:50,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 0a38e21359722b2f9c82783be0a70a56: 2023-05-31 07:59:50,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685519943387.0a38e21359722b2f9c82783be0a70a56. 2023-05-31 07:59:50,725 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 07:59:50,725 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(3303): Received CLOSE for e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:50,726 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 07:59:50,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing e8a8980874fa353f9ac8dd21156ef779, disabling compactions & flushes 2023-05-31 07:59:50,726 DEBUG [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1504): Waiting on 1588230740, e8a8980874fa353f9ac8dd21156ef779 2023-05-31 07:59:50,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 
2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. after waiting 0 ms 2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:50,726 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 07:59:50,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 07:59:50,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for e8a8980874fa353f9ac8dd21156ef779: 2023-05-31 07:59:50,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 07:59:50,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685519942499.e8a8980874fa353f9ac8dd21156ef779. 2023-05-31 07:59:50,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-31 07:59:50,732 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,32895,1685519943260; all regions closed. 2023-05-31 07:59:50,732 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:50,748 DEBUG [RS:1;jenkins-hbase16:32895] wal.AbstractFSWAL(1028): Moved 4 WAL file(s) to /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs 2023-05-31 07:59:50,748 INFO [RS:1;jenkins-hbase16:32895] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C32895%2C1685519943260:(num 1685519990463) 2023-05-31 07:59:50,748 DEBUG [RS:1;jenkins-hbase16:32895] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,748 INFO [RS:1;jenkins-hbase16:32895] regionserver.LeaseManager(133): Closed leases 2023-05-31 07:59:50,749 INFO [RS:1;jenkins-hbase16:32895] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 07:59:50,749 INFO [RS:1;jenkins-hbase16:32895] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 07:59:50,749 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 07:59:50,749 INFO [RS:1;jenkins-hbase16:32895] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 07:59:50,749 INFO [RS:1;jenkins-hbase16:32895] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-31 07:59:50,750 INFO [RS:1;jenkins-hbase16:32895] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:32895 2023-05-31 07:59:50,762 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:50,762 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,32895,1685519943260 2023-05-31 07:59:50,762 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:50,762 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:50,762 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:50,771 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,32895,1685519943260] 2023-05-31 07:59:50,771 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,32895,1685519943260; numProcessing=1 2023-05-31 07:59:50,779 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,32895,1685519943260 already deleted, retry=false 2023-05-31 07:59:50,779 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,32895,1685519943260 expired; onlineServers=1 2023-05-31 07:59:50,922 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:59:50,922 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:32895-0x100803f7dc40005, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 07:59:50,922 INFO [RS:1;jenkins-hbase16:32895] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,32895,1685519943260; zookeeper connection closed. 2023-05-31 07:59:50,925 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@20ed892f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@20ed892f 2023-05-31 07:59:50,927 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-31 07:59:50,927 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,35401,1685519941680; all regions closed. 
2023-05-31 07:59:50,928 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:50,938 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/WALs/jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:50,943 DEBUG [RS:0;jenkins-hbase16:35401] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,944 INFO [RS:0;jenkins-hbase16:35401] regionserver.LeaseManager(133): Closed leases 2023-05-31 07:59:50,944 INFO [RS:0;jenkins-hbase16:35401] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 07:59:50,944 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 07:59:50,945 INFO [RS:0;jenkins-hbase16:35401] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:35401 2023-05-31 07:59:50,954 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:50,954 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,35401,1685519941680 2023-05-31 07:59:50,962 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,35401,1685519941680] 2023-05-31 07:59:50,962 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,35401,1685519941680; numProcessing=2 2023-05-31 07:59:50,970 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,35401,1685519941680 already deleted, retry=false 2023-05-31 07:59:50,970 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,35401,1685519941680 expired; onlineServers=0 2023-05-31 07:59:50,970 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,33919,1685519941529' ***** 2023-05-31 07:59:50,971 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 07:59:50,972 DEBUG [M:0;jenkins-hbase16:33919] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23272879, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, 
maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 07:59:50,972 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,33919,1685519941529 2023-05-31 07:59:50,972 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,33919,1685519941529; all regions closed. 2023-05-31 07:59:50,972 DEBUG [M:0;jenkins-hbase16:33919] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 07:59:50,972 DEBUG [M:0;jenkins-hbase16:33919] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 07:59:50,972 DEBUG [M:0;jenkins-hbase16:33919] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 07:59:50,972 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519941997] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519941997,5,FailOnTimeoutGroup] 2023-05-31 07:59:50,973 INFO [M:0;jenkins-hbase16:33919] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 07:59:50,973 INFO [M:0;jenkins-hbase16:33919] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-31 07:59:50,972 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 07:59:50,972 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519941997] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519941997,5,FailOnTimeoutGroup] 2023-05-31 07:59:50,974 INFO [M:0;jenkins-hbase16:33919] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-05-31 07:59:50,975 DEBUG [M:0;jenkins-hbase16:33919] master.HMaster(1512): Stopping service threads 2023-05-31 07:59:50,975 INFO [M:0;jenkins-hbase16:33919] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 07:59:50,976 ERROR [M:0;jenkins-hbase16:33919] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 07:59:50,976 INFO [M:0;jenkins-hbase16:33919] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 07:59:50,977 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 07:59:50,984 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 07:59:50,984 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:50,984 DEBUG [M:0;jenkins-hbase16:33919] zookeeper.ZKUtil(398): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 07:59:50,984 WARN [M:0;jenkins-hbase16:33919] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 07:59:50,984 INFO [M:0;jenkins-hbase16:33919] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 07:59:50,985 INFO [M:0;jenkins-hbase16:33919] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 07:59:50,986 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 07:59:50,986 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 07:59:50,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 07:59:50,986 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:50,986 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 07:59:50,986 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:50,987 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.11 KB heapSize=45.77 KB 2023-05-31 07:59:50,988 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:50,988 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:43601,DS-c25ab37c-1550-4f98-a2de-73dc094267bb,DISK]] 2023-05-31 07:59:50,988 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33919%2C1685519941529:(num 1685519971997) roll requested 2023-05-31 07:59:50,990 WARN [Thread-764] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741874_1056 2023-05-31 07:59:50,991 WARN [Thread-764] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK] 2023-05-31 07:59:50,992 WARN [Thread-764] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741875_1057 2023-05-31 07:59:50,993 WARN [Thread-764] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK] 2023-05-31 07:59:50,993 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-31 07:59:50,993 WARN [IPC Server handler 3 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 
2023-05-31 07:59:50,993 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:50,993 WARN [Thread-765] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741876_1058
2023-05-31 07:59:50,994 WARN [Thread-765] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]
2023-05-31 07:59:50,996 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:59274 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741878_1060]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data4/current]'}, localName='127.0.0.1:40499', datanodeUuid='6277fbff-9b25-4167-9ac9-092927692ba5', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741878_1060 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused
2023-05-31 07:59:50,996 WARN [Thread-765] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741878_1060
2023-05-31 07:59:50,997 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:59274 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741878_1060]] datanode.DataXceiver(323): 127.0.0.1:40499:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59274 dst: /127.0.0.1:40499
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:50,997 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997 with entries=1, filesize=307 B; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988
2023-05-31 07:59:50,998 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]
2023-05-31 07:59:50,998 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997 is not closed yet, will try archiving it next time
2023-05-31 07:59:50,998 WARN [Thread-765] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:51,003 INFO [M:0;jenkins-hbase16:33919] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.11 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4c8b67c182d343f08575104eaae07460
2023-05-31 07:59:51,008 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4c8b67c182d343f08575104eaae07460 as hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4c8b67c182d343f08575104eaae07460
2023-05-31 07:59:51,013 INFO [M:0;jenkins-hbase16:33919] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4c8b67c182d343f08575104eaae07460, entries=11, sequenceid=92, filesize=7.0 K
2023-05-31 07:59:51,015 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL.
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]
2023-05-31 07:59:51,015 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]
2023-05-31 07:59:51,015 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegion(2948): Finished flush of dataSize ~38.11 KB/39023, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=92, compaction requested=false
2023-05-31 07:59:51,015 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33919%2C1685519941529:(num 1685519990988) roll requested
2023-05-31 07:59:51,016 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 07:59:51,017 DEBUG [M:0;jenkins-hbase16:33919] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 07:59:51,018 WARN [Thread-775] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741880_1062
2023-05-31 07:59:51,019 WARN [Thread-775] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:40485,DS-69632975-7602-4370-8711-4365abf9392e,DISK]
2023-05-31 07:59:51,021 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:59292 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741881_1063]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data4/current]'}, localName='127.0.0.1:40499', datanodeUuid='6277fbff-9b25-4167-9ac9-092927692ba5', xmitsInProgress=0}:Exception transfering block BP-935495920-188.40.62.62-1685519940089:blk_1073741881_1063 to mirror 127.0.0.1:42741: java.net.ConnectException: Connection refused
2023-05-31 07:59:51,021 WARN [Thread-775] hdfs.DataStreamer(1658): Abandoning BP-935495920-188.40.62.62-1685519940089:blk_1073741881_1063
2023-05-31 07:59:51,021 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-610355028_17 at /127.0.0.1:59292 [Receiving block BP-935495920-188.40.62.62-1685519940089:blk_1073741881_1063]] datanode.DataXceiver(323): 127.0.0.1:40499:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:59292 dst: /127.0.0.1:40499
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 07:59:51,021 WARN [Thread-775] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42741,DS-f6ef133c-bd9a-4bfb-99b5-559be363bd5d,DISK]
2023-05-31 07:59:51,022 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-05-31 07:59:51,022 WARN [IPC Server handler 3 on default port 38437] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-05-31 07:59:51,022 WARN [IPC Server handler 3 on default port 38437] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-05-31 07:59:51,026 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988 with entries=1, filesize=341 B; new WAL /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519991016
2023-05-31 07:59:51,026 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline:
[DatanodeInfoWithStorage[127.0.0.1:40499,DS-00be9d5d-4bc7-46cd-96d6-e99a6c309c31,DISK]]
2023-05-31 07:59:51,026 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988 is not closed yet, will try archiving it next time
2023-05-31 07:59:51,026 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529
2023-05-31 07:59:51,027 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829
2023-05-31 07:59:51,031 INFO [WAL-Archive-0] region.MasterRegionUtils(50): Moved hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519941829$masterlocalwal$
2023-05-31 07:59:51,031 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997
2023-05-31 07:59:51,033 INFO [WAL-Archive-0] region.MasterRegionUtils(50): Moved hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519971997$masterlocalwal$
2023-05-31 07:59:51,062 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 07:59:51,062 INFO [RS:0;jenkins-hbase16:35401] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,35401,1685519941680; zookeeper connection closed.
2023-05-31 07:59:51,062 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): regionserver:35401-0x100803f7dc40001, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 07:59:51,063 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@41633302] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@41633302
2023-05-31 07:59:51,064 INFO [Listener at localhost.localdomain/38935] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete
2023-05-31 07:59:51,363 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 07:59:51,430 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/WALs/jenkins-hbase16.apache.org,33919,1685519941529/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988
2023-05-31 07:59:51,433 INFO [WAL-Archive-0] region.MasterRegionUtils(50): Moved hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/MasterData/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988 to hdfs://localhost.localdomain:38437/user/jenkins/test-data/17f6aefc-2683-9708-460a-ff66dfc96bff/oldWALs/jenkins-hbase16.apache.org%2C33919%2C1685519941529.1685519990988$masterlocalwal$
2023-05-31 07:59:51,433 INFO [M:0;jenkins-hbase16:33919] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-31 07:59:51,433 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 07:59:51,434 INFO [M:0;jenkins-hbase16:33919] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:33919
2023-05-31 07:59:51,450 DEBUG [M:0;jenkins-hbase16:33919] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,33919,1685519941529 already deleted, retry=false
2023-05-31 07:59:51,567 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 07:59:51,567 DEBUG [Listener at localhost.localdomain/43413-EventThread] zookeeper.ZKWatcher(600): master:33919-0x100803f7dc40000, quorum=127.0.0.1:51691, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 07:59:51,567 INFO [M:0;jenkins-hbase16:33919] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,33919,1685519941529; zookeeper connection closed.
2023-05-31 07:59:51,568 WARN [Listener at localhost.localdomain/38935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 07:59:51,573 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 07:59:51,678 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 07:59:51,678 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 6277fbff-9b25-4167-9ac9-092927692ba5) service to localhost.localdomain/127.0.0.1:38437
2023-05-31 07:59:51,679 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data3/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:51,680 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data4/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:51,683 WARN [Listener at localhost.localdomain/38935] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 07:59:51,687 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 07:59:51,795 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 07:59:51,795 WARN [BP-935495920-188.40.62.62-1685519940089 heartbeating to localhost.localdomain/127.0.0.1:38437] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-935495920-188.40.62.62-1685519940089 (Datanode Uuid 89651c44-e03a-4d19-a740-be7cc6df83a2) service to localhost.localdomain/127.0.0.1:38437
2023-05-31 07:59:51,797 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data7/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:51,798 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/cluster_bc207405-3b8f-a6d3-d352-f1b2a325978a/dfs/data/data8/current/BP-935495920-188.40.62.62-1685519940089] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 07:59:51,812 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 07:59:51,929 INFO [Listener at localhost.localdomain/38935] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 07:59:51,968 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 07:59:51,977 INFO [Listener at localhost.localdomain/38935] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=78 (was 51)
Potentially hanging thread: Timer for 'DataNode' metrics system
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:38437 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-4
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:38437
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:38437 from jenkins.hfs.1
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-17-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/38935
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-6-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: regionserver/jenkins-hbase16:0.leaseChecker
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82)
Potentially hanging thread: IPC Parameter Sending Thread #2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-16-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Abort regionserver monitor
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: nioEventLoopGroup-12-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:38437 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-5-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:38437 from jenkins.hfs.2
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-6-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:38437 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-3 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-12-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=463 (was 442) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=128 (was 91) - SystemLoadAverage LEAK? 
-, ProcessCount=166 (was 166), AvailableMemoryMB=7879 (was 8090)
2023-05-31 07:59:51,985 INFO [Listener at localhost.localdomain/38935] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=78, OpenFileDescriptor=463, MaxFileDescriptor=60000, SystemLoadAverage=128, ProcessCount=165, AvailableMemoryMB=7879
2023-05-31 07:59:51,986 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 07:59:51,986 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/hadoop.log.dir so I do NOT create it in target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8
2023-05-31 07:59:51,986 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/52b3838c-a300-157d-ddb6-9cfd67ba6f6b/hadoop.tmp.dir so I do NOT create it in target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8
2023-05-31 07:59:51,986 INFO [Listener at localhost.localdomain/38935] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71, deleteOnExit=true
2023-05-31 07:59:51,986 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/test.cache.data in system properties and HBase conf
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/hadoop.log.dir in system properties and HBase conf
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 07:59:51,987 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 07:59:51,987 DEBUG [Listener at localhost.localdomain/38935] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 07:59:51,988 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 07:59:51,989 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/nfs.dump.dir in system properties and HBase conf
2023-05-31 07:59:51,989 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir in system properties and HBase conf
2023-05-31 07:59:51,989 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 07:59:51,989 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 07:59:51,989 INFO [Listener at localhost.localdomain/38935] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 07:59:51,991 WARN [Listener at localhost.localdomain/38935] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 07:59:51,992 WARN [Listener at localhost.localdomain/38935] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 07:59:51,992 WARN [Listener at localhost.localdomain/38935] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 07:59:52,093 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 07:59:52,221 WARN [Listener at localhost.localdomain/38935] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:59:52,225 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:59:52,230 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_localdomain_38225_hdfs____.58x9di/webapp
2023-05-31 07:59:52,301 INFO [Listener at localhost.localdomain/38935] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38225
2023-05-31 07:59:52,303 WARN [Listener at localhost.localdomain/38935] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 07:59:52,305 WARN [Listener at localhost.localdomain/38935] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 07:59:52,305 WARN [Listener at localhost.localdomain/38935] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 07:59:52,447 WARN [Listener at localhost.localdomain/41475] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:59:52,460 WARN [Listener at localhost.localdomain/41475] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 07:59:52,463 WARN [Listener at localhost.localdomain/41475] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:59:52,464 INFO [Listener at localhost.localdomain/41475] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:59:52,470 INFO [Listener at localhost.localdomain/41475] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_34297_datanode____.vc2g96/webapp
2023-05-31 07:59:52,543 INFO [Listener at localhost.localdomain/41475] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34297
2023-05-31 07:59:52,549 WARN [Listener at localhost.localdomain/46147] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:59:52,559 WARN [Listener at localhost.localdomain/46147] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 07:59:52,561 WARN [Listener at localhost.localdomain/46147] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 07:59:52,562 INFO [Listener at localhost.localdomain/46147] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 07:59:52,566 INFO [Listener at localhost.localdomain/46147] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_34053_datanode____.scgi4s/webapp
2023-05-31 07:59:52,639 INFO [Listener at localhost.localdomain/46147] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34053
2023-05-31 07:59:52,645 WARN [Listener at localhost.localdomain/41991] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 07:59:53,249 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa40284fa2f9bfaf9: Processing first storage report for DS-1393a46f-cd72-46fd-aa17-d5732ae7149c from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3
2023-05-31 07:59:53,249 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa40284fa2f9bfaf9: from storage DS-1393a46f-cd72-46fd-aa17-d5732ae7149c node DatanodeRegistration(127.0.0.1:40683, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=41809, infoSecurePort=0, ipcPort=46147, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:59:53,249 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa40284fa2f9bfaf9: Processing first storage report for DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3
2023-05-31 07:59:53,249 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa40284fa2f9bfaf9: from storage DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 node DatanodeRegistration(127.0.0.1:40683, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=41809, infoSecurePort=0, ipcPort=46147, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:59:53,318 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaecd09218b0ad221: Processing first storage report for DS-82ce75f2-750f-413d-8328-131b9627e067 from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3
2023-05-31 07:59:53,318 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaecd09218b0ad221: from storage DS-82ce75f2-750f-413d-8328-131b9627e067 node DatanodeRegistration(127.0.0.1:42457, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=39305, infoSecurePort=0, ipcPort=41991, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:59:53,318 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xaecd09218b0ad221: Processing first storage report for DS-13871883-59e8-4505-8906-87d42f4e8a6e from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3
2023-05-31 07:59:53,318 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xaecd09218b0ad221: from storage DS-13871883-59e8-4505-8906-87d42f4e8a6e node DatanodeRegistration(127.0.0.1:42457, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=39305, infoSecurePort=0, ipcPort=41991, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 07:59:53,357 DEBUG [Listener at localhost.localdomain/41991] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8
2023-05-31 07:59:53,359 INFO [Listener at localhost.localdomain/41991] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/zookeeper_0, clientPort=50938, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 07:59:53,361 INFO [Listener at localhost.localdomain/41991] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50938
2023-05-31 07:59:53,361 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:59:53,362 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:59:53,377 INFO [Listener at localhost.localdomain/41991] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70 with version=8
2023-05-31 07:59:53,378 INFO [Listener at localhost.localdomain/41991] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging
2023-05-31 07:59:53,379 INFO [Listener at localhost.localdomain/41991] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45
2023-05-31 07:59:53,379 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:59:53,379 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 07:59:53,380 INFO [Listener at localhost.localdomain/41991] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 07:59:53,380 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:59:53,380 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-05-31 07:59:53,380 INFO [Listener at localhost.localdomain/41991] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-05-31 07:59:53,381 INFO [Listener at localhost.localdomain/41991] ipc.NettyRpcServer(120): Bind to /188.40.62.62:33145
2023-05-31 07:59:53,382 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:59:53,383 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 07:59:53,383 INFO [Listener at localhost.localdomain/41991] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33145 connecting to ZooKeeper ensemble=127.0.0.1:50938
2023-05-31 07:59:53,422 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:331450x0, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-05-31 07:59:53,423 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33145-0x100804048540000 connected
2023-05-31 07:59:53,487 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 07:59:53,488 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 07:59:53,489 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-05-31 07:59:53,490 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33145
2023-05-31 07:59:53,490 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33145
2023-05-31 07:59:53,490 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33145
2023-05-31 07:59:53,491 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33145
2023-05-31 07:59:53,492 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33145
2023-05-31 07:59:53,492 INFO [Listener at localhost.localdomain/41991] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70, hbase.cluster.distributed=false
2023-05-31 07:59:53,510 INFO [Listener at localhost.localdomain/41991] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45
2023-05-31 07:59:53,510 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-05-31 07:59:53,510 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-05-31 07:59:53,511 INFO [Listener at localhost.localdomain/41991] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-05-31 07:59:53,511 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1,
maxQueueLength=30, handlerCount=3 2023-05-31 07:59:53,511 INFO [Listener at localhost.localdomain/41991] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 07:59:53,511 INFO [Listener at localhost.localdomain/41991] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 07:59:53,512 INFO [Listener at localhost.localdomain/41991] ipc.NettyRpcServer(120): Bind to /188.40.62.62:43665 2023-05-31 07:59:53,512 INFO [Listener at localhost.localdomain/41991] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 07:59:53,513 DEBUG [Listener at localhost.localdomain/41991] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 07:59:53,513 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:53,514 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:53,515 INFO [Listener at localhost.localdomain/41991] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43665 connecting to ZooKeeper ensemble=127.0.0.1:50938 2023-05-31 07:59:53,525 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:436650x0, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 07:59:53,526 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): regionserver:436650x0, quorum=127.0.0.1:50938, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 07:59:53,526 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43665-0x100804048540001 connected 2023-05-31 07:59:53,527 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 07:59:53,527 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ZKUtil(164): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 07:59:53,527 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43665 2023-05-31 07:59:53,528 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43665 2023-05-31 07:59:53,528 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43665 2023-05-31 07:59:53,528 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43665 2023-05-31 07:59:53,528 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43665 2023-05-31 07:59:53,529 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,537 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 07:59:53,537 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,545 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 07:59:53,545 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 07:59:53,545 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,546 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 07:59:53,547 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,33145,1685519993379 from backup master directory 2023-05-31 07:59:53,548 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 07:59:53,558 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,558 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 07:59:53,558 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 07:59:53,558 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,578 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/hbase.id with ID: 9d08f9ef-077d-42b2-8354-16a17ae17181 2023-05-31 07:59:53,591 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:53,603 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,612 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e04fc4e to 127.0.0.1:50938 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:53,621 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31fb03ac, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:53,621 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:53,622 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 07:59:53,622 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:53,623 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store-tmp 2023-05-31 07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 
07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 07:59:53,633 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 07:59:53,633 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 07:59:53,633 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 07:59:53,634 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,637 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C33145%2C1685519993379, suffix=, logDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379, archiveDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/oldWALs, maxLogs=10 2023-05-31 07:59:53,645 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685519993638 2023-05-31 07:59:53,646 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] 2023-05-31 07:59:53,646 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:53,646 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:53,646 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,646 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,649 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,651 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 07:59:53,651 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 07:59:53,652 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:53,653 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,653 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,656 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 07:59:53,666 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:53,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=737514, jitterRate=-0.06220346689224243}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:59:53,667 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 07:59:53,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 07:59:53,669 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 07:59:53,669 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 07:59:53,669 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 07:59:53,670 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 07:59:53,670 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 07:59:53,670 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 07:59:53,671 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 07:59:53,672 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 07:59:53,682 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 07:59:53,683 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 07:59:53,684 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 07:59:53,684 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 07:59:53,684 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 07:59:53,695 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,696 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 07:59:53,696 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 07:59:53,697 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 07:59:53,703 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 07:59:53,703 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 07:59:53,704 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,704 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,33145,1685519993379, sessionid=0x100804048540000, setting cluster-up flag (Was=false) 2023-05-31 07:59:53,720 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,745 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 07:59:53,747 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,767 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:53,792 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 07:59:53,793 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:53,793 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.hbase-snapshot/.tmp 2023-05-31 07:59:53,797 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:59:53,798 DEBUG 
[master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 07:59:53,798 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685520023798 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 07:59:53,799 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 07:59:53,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 07:59:53,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 07:59:53,800 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 07:59:53,800 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 07:59:53,801 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 07:59:53,801 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 07:59:53,801 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 07:59:53,801 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 07:59:53,801 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519993801,5,FailOnTimeoutGroup] 2023-05-31 07:59:53,801 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519993801,5,FailOnTimeoutGroup] 2023-05-31 07:59:53,802 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,802 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 07:59:53,802 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,802 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:53,802 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:53,816 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:53,817 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:53,817 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70 2023-05-31 07:59:53,826 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:53,827 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 07:59:53,829 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/info 2023-05-31 07:59:53,830 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 07:59:53,830 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(951): ClusterId : 9d08f9ef-077d-42b2-8354-16a17ae17181 2023-05-31 07:59:53,830 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:53,832 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 07:59:53,832 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 07:59:53,834 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:59:53,835 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 07:59:53,835 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:53,835 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 07:59:53,837 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/table 2023-05-31 07:59:53,837 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 07:59:53,838 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:53,839 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740 2023-05-31 07:59:53,839 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740 2023-05-31 07:59:53,842 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 07:59:53,842 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 07:59:53,842 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 07:59:53,843 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 07:59:53,845 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:53,846 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=708735, jitterRate=-0.09879688918590546}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 07:59:53,846 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 07:59:53,846 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & 
flushes 2023-05-31 07:59:53,846 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 07:59:53,846 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 07:59:53,846 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 07:59:53,846 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 07:59:53,848 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 07:59:53,848 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 07:59:53,849 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 07:59:53,849 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 07:59:53,849 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 07:59:53,851 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 07:59:53,851 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 07:59:53,853 DEBUG [RS:0;jenkins-hbase16:43665] zookeeper.ReadOnlyZKClient(139): Connect 0x5f1e0a12 to 127.0.0.1:50938 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:53,853 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 07:59:53,862 DEBUG [RS:0;jenkins-hbase16:43665] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b219529, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:53,862 DEBUG [RS:0;jenkins-hbase16:43665] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e230029, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 07:59:53,871 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:43665 2023-05-31 07:59:53,871 INFO [RS:0;jenkins-hbase16:43665] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 07:59:53,871 INFO [RS:0;jenkins-hbase16:43665] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 07:59:53,871 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 07:59:53,871 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,33145,1685519993379 with isa=jenkins-hbase16.apache.org/188.40.62.62:43665, startcode=1685519993510 2023-05-31 07:59:53,872 DEBUG [RS:0;jenkins-hbase16:43665] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 07:59:53,875 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:45297, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 07:59:53,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,877 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70 2023-05-31 07:59:53,877 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41475 2023-05-31 07:59:53,877 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 07:59:53,887 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 07:59:53,887 DEBUG [RS:0;jenkins-hbase16:43665] zookeeper.ZKUtil(162): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,888 WARN [RS:0;jenkins-hbase16:43665] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not 
be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 07:59:53,888 INFO [RS:0;jenkins-hbase16:43665] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:53,888 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,888 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,43665,1685519993510] 2023-05-31 07:59:53,894 DEBUG [RS:0;jenkins-hbase16:43665] zookeeper.ZKUtil(162): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,895 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 07:59:53,895 INFO [RS:0;jenkins-hbase16:43665] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 07:59:53,897 INFO [RS:0;jenkins-hbase16:43665] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 07:59:53,898 INFO [RS:0;jenkins-hbase16:43665] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 07:59:53,898 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:53,898 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 07:59:53,900 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,900 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 07:59:53,901 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,901 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,901 DEBUG [RS:0;jenkins-hbase16:43665] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,901 DEBUG [RS:0;jenkins-hbase16:43665] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 07:59:53,903 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,903 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,903 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:53,915 INFO [RS:0;jenkins-hbase16:43665] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 07:59:53,915 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43665,1685519993510-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:53,923 INFO [RS:0;jenkins-hbase16:43665] regionserver.Replication(203): jenkins-hbase16.apache.org,43665,1685519993510 started 2023-05-31 07:59:53,923 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,43665,1685519993510, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:43665, sessionid=0x100804048540001 2023-05-31 07:59:53,923 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 07:59:53,923 DEBUG [RS:0;jenkins-hbase16:43665] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,923 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,43665,1685519993510' 2023-05-31 07:59:53,923 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 07:59:53,924 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 07:59:53,924 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 07:59:53,924 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 07:59:53,925 DEBUG [RS:0;jenkins-hbase16:43665] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:53,925 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,43665,1685519993510' 2023-05-31 07:59:53,925 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 07:59:53,925 DEBUG [RS:0;jenkins-hbase16:43665] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 07:59:53,925 DEBUG [RS:0;jenkins-hbase16:43665] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 07:59:53,926 INFO [RS:0;jenkins-hbase16:43665] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 07:59:53,926 INFO [RS:0;jenkins-hbase16:43665] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 07:59:54,003 DEBUG [jenkins-hbase16:33145] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 07:59:54,004 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,43665,1685519993510, state=OPENING 2023-05-31 07:59:54,012 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 07:59:54,020 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:54,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,43665,1685519993510}] 2023-05-31 07:59:54,021 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:59:54,029 INFO [RS:0;jenkins-hbase16:43665] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C43665%2C1685519993510, suffix=, 
logDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510, archiveDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/oldWALs, maxLogs=32 2023-05-31 07:59:54,040 INFO [RS:0;jenkins-hbase16:43665] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 2023-05-31 07:59:54,040 DEBUG [RS:0;jenkins-hbase16:43665] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] 2023-05-31 07:59:54,178 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:54,178 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 07:59:54,181 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:42096, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 07:59:54,187 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 07:59:54,187 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 07:59:54,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C43665%2C1685519993510.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510, archiveDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/oldWALs, maxLogs=32 2023-05-31 07:59:54,217 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.meta.1685519994194.meta 2023-05-31 07:59:54,217 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] 2023-05-31 07:59:54,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:54,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 07:59:54,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 07:59:54,218 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 07:59:54,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 07:59:54,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:54,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 07:59:54,219 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 07:59:54,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 07:59:54,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/info 2023-05-31 07:59:54,222 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/info 2023-05-31 07:59:54,223 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 07:59:54,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:54,224 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 07:59:54,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:59:54,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/rep_barrier 2023-05-31 07:59:54,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 07:59:54,225 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:54,225 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 07:59:54,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/table 2023-05-31 07:59:54,226 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740/table 2023-05-31 07:59:54,226 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 07:59:54,227 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:54,228 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740 2023-05-31 07:59:54,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/meta/1588230740 2023-05-31 07:59:54,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 07:59:54,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 07:59:54,233 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=764197, jitterRate=-0.02827438712120056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 07:59:54,233 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 07:59:54,235 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685519994178 2023-05-31 07:59:54,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 07:59:54,238 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 07:59:54,239 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,43665,1685519993510, state=OPEN 2023-05-31 07:59:54,245 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 07:59:54,245 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 07:59:54,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 07:59:54,249 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,43665,1685519993510 in 224 msec 2023-05-31 07:59:54,251 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 07:59:54,252 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 399 msec 2023-05-31 07:59:54,253 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 457 msec 2023-05-31 07:59:54,254 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685519994254, completionTime=-1 2023-05-31 07:59:54,254 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 07:59:54,254 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 07:59:54,256 DEBUG [hconnection-0x2b64aecd-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 07:59:54,258 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:42112, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 07:59:54,260 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 07:59:54,260 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685520054260 2023-05-31 07:59:54,260 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685520114260 2023-05-31 07:59:54,260 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-31 07:59:54,279 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33145,1685519993379-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:54,279 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33145,1685519993379-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:54,279 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33145,1685519993379-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 07:59:54,279 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:33145, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:54,279 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 07:59:54,280 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 07:59:54,280 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:54,282 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 07:59:54,282 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 07:59:54,284 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 07:59:54,286 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 07:59:54,287 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,288 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326 empty. 2023-05-31 07:59:54,288 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,289 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 07:59:54,300 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:54,302 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8c17b1cd5a69f0f3c6183bb83ca1c326, NAME => 'hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8c17b1cd5a69f0f3c6183bb83ca1c326, disabling compactions & flushes 2023-05-31 07:59:54,313 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. after waiting 0 ms 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,313 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,313 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8c17b1cd5a69f0f3c6183bb83ca1c326: 2023-05-31 07:59:54,316 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 07:59:54,317 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519994316"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519994316"}]},"ts":"1685519994316"} 2023-05-31 07:59:54,319 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 07:59:54,320 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 07:59:54,320 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519994320"}]},"ts":"1685519994320"} 2023-05-31 07:59:54,322 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 07:59:54,362 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c17b1cd5a69f0f3c6183bb83ca1c326, ASSIGN}] 2023-05-31 07:59:54,365 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8c17b1cd5a69f0f3c6183bb83ca1c326, ASSIGN 2023-05-31 07:59:54,366 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8c17b1cd5a69f0f3c6183bb83ca1c326, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,43665,1685519993510; forceNewPlan=false, retain=false 2023-05-31 07:59:54,517 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8c17b1cd5a69f0f3c6183bb83ca1c326, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:54,518 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519994517"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519994517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519994517"}]},"ts":"1685519994517"} 2023-05-31 07:59:54,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8c17b1cd5a69f0f3c6183bb83ca1c326, server=jenkins-hbase16.apache.org,43665,1685519993510}] 2023-05-31 07:59:54,680 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8c17b1cd5a69f0f3c6183bb83ca1c326, NAME => 'hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:54,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:54,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,682 INFO 
[StoreOpener-8c17b1cd5a69f0f3c6183bb83ca1c326-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,684 DEBUG [StoreOpener-8c17b1cd5a69f0f3c6183bb83ca1c326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326/info 2023-05-31 07:59:54,684 DEBUG [StoreOpener-8c17b1cd5a69f0f3c6183bb83ca1c326-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326/info 2023-05-31 07:59:54,684 INFO [StoreOpener-8c17b1cd5a69f0f3c6183bb83ca1c326-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8c17b1cd5a69f0f3c6183bb83ca1c326 columnFamilyName info 2023-05-31 07:59:54,685 INFO [StoreOpener-8c17b1cd5a69f0f3c6183bb83ca1c326-1] regionserver.HStore(310): Store=8c17b1cd5a69f0f3c6183bb83ca1c326/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 07:59:54,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,687 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,691 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 07:59:54,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/hbase/namespace/8c17b1cd5a69f0f3c6183bb83ca1c326/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:54,695 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 8c17b1cd5a69f0f3c6183bb83ca1c326; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=788564, jitterRate=0.0027119815349578857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:59:54,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 8c17b1cd5a69f0f3c6183bb83ca1c326: 2023-05-31 07:59:54,697 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326., pid=6, masterSystemTime=1685519994675 2023-05-31 07:59:54,701 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,701 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 07:59:54,702 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8c17b1cd5a69f0f3c6183bb83ca1c326, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:54,702 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685519994702"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519994702"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519994702"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519994702"}]},"ts":"1685519994702"} 2023-05-31 07:59:54,709 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 07:59:54,709 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8c17b1cd5a69f0f3c6183bb83ca1c326, server=jenkins-hbase16.apache.org,43665,1685519993510 in 184 msec 2023-05-31 07:59:54,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 07:59:54,712 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8c17b1cd5a69f0f3c6183bb83ca1c326, ASSIGN in 347 msec 2023-05-31 07:59:54,712 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 07:59:54,713 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519994712"}]},"ts":"1685519994712"} 2023-05-31 07:59:54,714 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 07:59:54,726 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 07:59:54,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 447 msec 2023-05-31 07:59:54,784 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 07:59:54,795 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:54,796 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:54,805 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 07:59:54,825 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, 
quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:54,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-05-31 07:59:54,848 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 07:59:54,862 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 07:59:54,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 25 msec 2023-05-31 07:59:54,900 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 07:59:54,916 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 07:59:54,917 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.358sec 2023-05-31 07:59:54,917 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 07:59:54,918 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 07:59:54,918 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 07:59:54,918 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33145,1685519993379-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 07:59:54,919 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,33145,1685519993379-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 07:59:54,922 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 07:59:54,931 DEBUG [Listener at localhost.localdomain/41991] zookeeper.ReadOnlyZKClient(139): Connect 0x752a4c73 to 127.0.0.1:50938 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 07:59:54,947 DEBUG [Listener at localhost.localdomain/41991] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2cd3a051, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 07:59:54,951 DEBUG [hconnection-0x18144720-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 07:59:54,956 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:42124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 07:59:54,960 INFO [Listener at localhost.localdomain/41991] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 07:59:54,960 INFO [Listener at localhost.localdomain/41991] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 07:59:55,050 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 07:59:55,051 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 07:59:55,053 INFO [Listener at localhost.localdomain/41991] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 07:59:55,053 INFO [Listener at localhost.localdomain/41991] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-31 07:59:55,054 INFO [Listener at localhost.localdomain/41991] wal.TestLogRolling(432): Replication=2 2023-05-31 07:59:55,058 DEBUG [Listener at localhost.localdomain/41991] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 07:59:55,063 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:57390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 07:59:55,065 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 07:59:55,066 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 07:59:55,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 07:59:55,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-31 07:59:55,071 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 07:59:55,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.MasterRpcServices(697): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-31 07:59:55,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 07:59:55,072 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 07:59:55,074 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,074 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5 empty. 2023-05-31 07:59:55,075 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,075 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-31 07:59:55,491 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-31 07:59:55,494 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 75665e98cb5393fd33ec6fceea7403c5, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/.tmp 2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 75665e98cb5393fd33ec6fceea7403c5, disabling compactions & flushes 2023-05-31 07:59:55,507 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. after waiting 0 ms 2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 07:59:55,507 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 
2023-05-31 07:59:55,507 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 75665e98cb5393fd33ec6fceea7403c5: 2023-05-31 07:59:55,511 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 07:59:55,512 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685519995511"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685519995511"}]},"ts":"1685519995511"} 2023-05-31 07:59:55,514 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-31 07:59:55,515 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 07:59:55,515 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519995515"}]},"ts":"1685519995515"} 2023-05-31 07:59:55,517 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-31 07:59:55,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=75665e98cb5393fd33ec6fceea7403c5, ASSIGN}] 2023-05-31 07:59:55,565 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took 
xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=75665e98cb5393fd33ec6fceea7403c5, ASSIGN 2023-05-31 07:59:55,567 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=75665e98cb5393fd33ec6fceea7403c5, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,43665,1685519993510; forceNewPlan=false, retain=false 2023-05-31 07:59:55,719 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=75665e98cb5393fd33ec6fceea7403c5, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:55,719 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685519995718"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685519995718"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685519995718"}]},"ts":"1685519995718"} 2023-05-31 07:59:55,723 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 75665e98cb5393fd33ec6fceea7403c5, server=jenkins-hbase16.apache.org,43665,1685519993510}] 2023-05-31 07:59:55,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 
2023-05-31 07:59:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 75665e98cb5393fd33ec6fceea7403c5, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.', STARTKEY => '', ENDKEY => ''} 2023-05-31 07:59:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 07:59:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,889 INFO [StoreOpener-75665e98cb5393fd33ec6fceea7403c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,891 DEBUG [StoreOpener-75665e98cb5393fd33ec6fceea7403c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5/info 2023-05-31 07:59:55,891 DEBUG [StoreOpener-75665e98cb5393fd33ec6fceea7403c5-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5/info 2023-05-31 07:59:55,892 INFO [StoreOpener-75665e98cb5393fd33ec6fceea7403c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 75665e98cb5393fd33ec6fceea7403c5 columnFamilyName info 2023-05-31 07:59:55,893 INFO [StoreOpener-75665e98cb5393fd33ec6fceea7403c5-1] regionserver.HStore(310): Store=75665e98cb5393fd33ec6fceea7403c5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 07:59:55,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5 2023-05-31 
07:59:55,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 07:59:55,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/data/default/TestLogRolling-testLogRollOnPipelineRestart/75665e98cb5393fd33ec6fceea7403c5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 07:59:55,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 75665e98cb5393fd33ec6fceea7403c5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751246, jitterRate=-0.044741347432136536}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 07:59:55,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 75665e98cb5393fd33ec6fceea7403c5: 2023-05-31 07:59:55,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5., pid=11, masterSystemTime=1685519995877 2023-05-31 07:59:55,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 07:59:55,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 
2023-05-31 07:59:55,904 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=75665e98cb5393fd33ec6fceea7403c5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 07:59:55,905 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685519995904"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685519995904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685519995904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685519995904"}]},"ts":"1685519995904"} 2023-05-31 07:59:55,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 07:59:55,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 75665e98cb5393fd33ec6fceea7403c5, server=jenkins-hbase16.apache.org,43665,1685519993510 in 184 msec 2023-05-31 07:59:55,911 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 07:59:55,911 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=75665e98cb5393fd33ec6fceea7403c5, ASSIGN in 347 msec 2023-05-31 07:59:55,912 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 07:59:55,912 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685519995912"}]},"ts":"1685519995912"} 2023-05-31 07:59:55,914 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-31 07:59:55,921 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 07:59:55,923 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 855 msec 2023-05-31 07:59:56,122 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 07:59:59,896 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-31 08:00:01,390 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 08:00:05,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 08:00:05,073 INFO [Listener at localhost.localdomain/41991] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 2023-05-31 08:00:05,075 DEBUG [Listener at localhost.localdomain/41991] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-05-31 08:00:05,076 DEBUG [Listener at localhost.localdomain/41991] hbase.HBaseTestingUtility(2633): 
firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 08:00:07,081 INFO [Listener at localhost.localdomain/41991] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 2023-05-31 08:00:07,082 WARN [Listener at localhost.localdomain/41991] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:00:07,084 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 08:00:07,088 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 08:00:07,086 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009 
java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 08:00:07,089 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]) is bad. 2023-05-31 08:00:07,089 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685519993638 block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]) is bad. 
2023-05-31 08:00:07,089 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.meta.1685519994194.meta block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42457,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]) is bad. 2023-05-31 08:00:07,097 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-164752341_17 at /127.0.0.1:58702 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58702 dst: /127.0.0.1:40683 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40683 remote=/127.0.0.1:58702]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:07,098 WARN [PacketResponder: BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40683]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:07,098 WARN [PacketResponder: BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40683]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:07,098 WARN [PacketResponder: BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:40683]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at 
java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:07,097 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:58736 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58736 dst: /127.0.0.1:40683 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40683 remote=/127.0.0.1:58736]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:07,097 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:58732 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58732 dst: /127.0.0.1:40683 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:40683 remote=/127.0.0.1:58732]. 60000 millis timeout left. 
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,100 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:41770 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:42457:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41770 dst: /127.0.0.1:42457
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,099 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:41772 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:42457:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41772 dst: /127.0.0.1:42457
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,103 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-164752341_17 at /127.0.0.1:41734 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:42457:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41734 dst: /127.0.0.1:42457
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,123 INFO [Listener at localhost.localdomain/41991] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:00:07,227 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:00:07,227 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3) service to localhost.localdomain/127.0.0.1:41475
2023-05-31 08:00:07,229 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data3/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:07,229 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data4/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:07,240 WARN [Listener at localhost.localdomain/41991] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 08:00:07,243 WARN [Listener at localhost.localdomain/41991] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:07,244 INFO [Listener at localhost.localdomain/41991] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:07,249 INFO [Listener at localhost.localdomain/41991] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_38751_datanode____.jy8xhn/webapp
2023-05-31 08:00:07,320 INFO [Listener at localhost.localdomain/41991] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38751
2023-05-31 08:00:07,328 WARN [Listener at localhost.localdomain/44809] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:07,331 WARN [Listener at localhost.localdomain/44809] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:00:07,331 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1015
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 08:00:07,332 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1014
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 08:00:07,331 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1016
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 08:00:07,335 INFO [Listener at localhost.localdomain/44809] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:00:07,439 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:60324 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60324 dst: /127.0.0.1:40683
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,439 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-164752341_17 at /127.0.0.1:60298 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60298 dst: /127.0.0.1:40683
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,439 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:00:07,440 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:60314 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:40683:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60314 dst: /127.0.0.1:40683
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:07,441 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 50966384-c066-4cf1-9e9f-8904981cd7e3) service to localhost.localdomain/127.0.0.1:41475
2023-05-31 08:00:07,443 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data1/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:07,443 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data2/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:07,460 WARN [Listener at localhost.localdomain/44809] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 08:00:07,466 WARN [Listener at localhost.localdomain/44809] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:07,468 INFO [Listener at localhost.localdomain/44809] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:07,473 INFO [Listener at localhost.localdomain/44809] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_44415_datanode____.ok408x/webapp
2023-05-31 08:00:07,537 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb10b9c76e15f4beb: Processing first storage report for DS-82ce75f2-750f-413d-8328-131b9627e067 from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3
2023-05-31 08:00:07,537 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb10b9c76e15f4beb: from storage DS-82ce75f2-750f-413d-8328-131b9627e067 node DatanodeRegistration(127.0.0.1:38541, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=34887, infoSecurePort=0, ipcPort=44809, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:07,537 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb10b9c76e15f4beb: Processing first storage report for DS-13871883-59e8-4505-8906-87d42f4e8a6e from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3
2023-05-31 08:00:07,537 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb10b9c76e15f4beb: from storage DS-13871883-59e8-4505-8906-87d42f4e8a6e node DatanodeRegistration(127.0.0.1:38541, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=34887, infoSecurePort=0, ipcPort=44809, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:07,552 INFO [Listener at localhost.localdomain/44809] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44415
2023-05-31 08:00:07,558 WARN [Listener at localhost.localdomain/45553] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:07,897 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a40e4a993e9c210: Processing first storage report for DS-1393a46f-cd72-46fd-aa17-d5732ae7149c from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3
2023-05-31 08:00:07,897 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a40e4a993e9c210: from storage DS-1393a46f-cd72-46fd-aa17-d5732ae7149c node DatanodeRegistration(127.0.0.1:41689, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=40719, infoSecurePort=0, ipcPort=45553, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:07,897 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1a40e4a993e9c210: Processing first storage report for DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3
2023-05-31 08:00:07,898 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1a40e4a993e9c210: from storage DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 node DatanodeRegistration(127.0.0.1:41689, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=40719, infoSecurePort=0, ipcPort=45553, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:08,563 INFO [Listener at localhost.localdomain/45553] wal.TestLogRolling(481): Data Nodes restarted
2023-05-31 08:00:08,567 INFO [Listener at localhost.localdomain/45553] wal.AbstractTestLogRolling(233): Validated row row1002
2023-05-31 08:00:08,570 WARN [RS:0;jenkins-hbase16:43665.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:08,571 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C43665%2C1685519993510:(num 1685519994031) roll requested
2023-05-31 08:00:08,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43665] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
	at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:08,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43665] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:42124 deadline: 1685520018569, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
2023-05-31 08:00:08,583 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 newFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571
2023-05-31 08:00:08,583 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL
2023-05-31 08:00:08,583 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571
2023-05-31 08:00:08,585 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:41689,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]
2023-05-31 08:00:08,585 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 is not closed yet, will try archiving it next time
2023-05-31 08:00:08,585 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:08,585 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:20,616 INFO [Listener at localhost.localdomain/45553] wal.AbstractTestLogRolling(233): Validated row row1003
2023-05-31 08:00:22,619 WARN [Listener at localhost.localdomain/45553] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:00:22,621 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017
java.io.IOException: Bad response ERROR for BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:41689,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120)
2023-05-31 08:00:22,622 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:41689,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:41689,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]) is bad.
2023-05-31 08:00:22,622 WARN [PacketResponder: BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41689]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477)
	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:22,622 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:47100 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38541:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47100 dst: /127.0.0.1:38541
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:22,658 INFO [Listener at localhost.localdomain/45553] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:00:22,766 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:47752 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:41689:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47752 dst: /127.0.0.1:41689
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
	at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:22,769 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:00:22,769 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 50966384-c066-4cf1-9e9f-8904981cd7e3) service to localhost.localdomain/127.0.0.1:41475
2023-05-31 08:00:22,770 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data1/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:22,770 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data2/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:22,778 WARN [Listener at localhost.localdomain/45553] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 08:00:22,780 WARN [Listener at localhost.localdomain/45553] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:22,781 INFO [Listener at localhost.localdomain/45553] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:22,788 INFO [Listener at localhost.localdomain/45553] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_35117_datanode____26dyvw/webapp
2023-05-31 08:00:22,858 INFO [Listener at localhost.localdomain/45553] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35117
2023-05-31 08:00:22,865 WARN [Listener at localhost.localdomain/41577] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:22,868 WARN [Listener at localhost.localdomain/41577] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:00:22,868 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1018
java.io.EOFException: Unexpected EOF while trying to read response from server
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-31 08:00:22,872 INFO [Listener at localhost.localdomain/41577] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:00:22,979 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-2054453182_17 at /127.0.0.1:43324 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38541:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43324 dst: /127.0.0.1:38541
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left.
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:22,982 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:00:22,982 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 
360c7dcb-78a8-48bb-bb72-b3ba6f449dc3) service to localhost.localdomain/127.0.0.1:41475 2023-05-31 08:00:22,983 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data3/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:00:22,984 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data4/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:00:22,994 WARN [Listener at localhost.localdomain/41577] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 08:00:22,996 WARN [Listener at localhost.localdomain/41577] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 08:00:22,998 INFO [Listener at localhost.localdomain/41577] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:00:23,003 INFO [Listener at localhost.localdomain/41577] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/java.io.tmpdir/Jetty_localhost_37595_datanode____ru3hck/webapp 2023-05-31 08:00:23,093 INFO [Listener at localhost.localdomain/41577] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37595 2023-05-31 08:00:23,101 WARN [Listener at 
localhost.localdomain/46609] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:00:23,237 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa01822b51ebd366c: Processing first storage report for DS-1393a46f-cd72-46fd-aa17-d5732ae7149c from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3 2023-05-31 08:00:23,237 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa01822b51ebd366c: from storage DS-1393a46f-cd72-46fd-aa17-d5732ae7149c node DatanodeRegistration(127.0.0.1:46265, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=39007, infoSecurePort=0, ipcPort=41577, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:00:23,237 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa01822b51ebd366c: Processing first storage report for DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 from datanode 50966384-c066-4cf1-9e9f-8904981cd7e3 2023-05-31 08:00:23,237 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa01822b51ebd366c: from storage DS-2b4a9d5c-3eaf-4c89-ae6e-0ce0d1f12202 node DatanodeRegistration(127.0.0.1:46265, datanodeUuid=50966384-c066-4cf1-9e9f-8904981cd7e3, infoPort=39007, infoSecurePort=0, ipcPort=41577, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:00:23,406 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x89339738e0c3e4d: Processing first storage report for DS-82ce75f2-750f-413d-8328-131b9627e067 from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3 2023-05-31 08:00:23,406 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x89339738e0c3e4d: from storage 
DS-82ce75f2-750f-413d-8328-131b9627e067 node DatanodeRegistration(127.0.0.1:36863, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=37547, infoSecurePort=0, ipcPort=46609, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:00:23,406 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x89339738e0c3e4d: Processing first storage report for DS-13871883-59e8-4505-8906-87d42f4e8a6e from datanode 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3 2023-05-31 08:00:23,407 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x89339738e0c3e4d: from storage DS-13871883-59e8-4505-8906-87d42f4e8a6e node DatanodeRegistration(127.0.0.1:36863, datanodeUuid=360c7dcb-78a8-48bb-bb72-b3ba6f449dc3, infoPort=37547, infoSecurePort=0, ipcPort=46609, storageInfo=lv=-57;cid=testClusterID;nsid=1067829212;c=1685519991994), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:00:23,801 WARN [master/jenkins-hbase16:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:23,802 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C33145%2C1685519993379:(num 1685519993638) roll requested 2023-05-31 08:00:23,802 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:23,804 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:23,816 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-31 08:00:23,816 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685519993638 with entries=88, filesize=43.81 KB; new WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685520023802 2023-05-31 08:00:23,816 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36863,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:46265,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] 2023-05-31 08:00:23,816 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:23,816 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685519993638 is not closed yet, will try archiving it next time 2023-05-31 08:00:23,816 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379/jenkins-hbase16.apache.org%2C33145%2C1685519993379.1685519993638; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:24,108 INFO [Listener at localhost.localdomain/46609] wal.TestLogRolling(498): Data Nodes restarted 2023-05-31 08:00:24,111 INFO [Listener at localhost.localdomain/46609] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-31 08:00:24,113 WARN [RS:0;jenkins-hbase16:43665.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:24,114 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C43665%2C1685519993510:(num 1685520008571) roll requested 2023-05-31 08:00:24,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43665] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:24,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43665] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:42124 deadline: 1685520034112, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-31 08:00:24,132 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 newFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 2023-05-31 08:00:24,132 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-31 08:00:24,132 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 2023-05-31 08:00:24,132 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46265,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK], DatanodeInfoWithStorage[127.0.0.1:36863,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]] 2023-05-31 08:00:24,132 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:24,132 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 is not closed yet, will try archiving it next time 2023-05-31 08:00:24,133 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571; THIS FILE 
WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38541,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:36,215 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 newFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 2023-05-31 08:00:36,216 INFO [Listener at localhost.localdomain/46609] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 2023-05-31 08:00:36,222 DEBUG [Listener at localhost.localdomain/46609] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36863,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], DatanodeInfoWithStorage[127.0.0.1:46265,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] 2023-05-31 08:00:36,222 DEBUG [Listener at localhost.localdomain/46609] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 is not closed yet, will try archiving it next time 2023-05-31 08:00:36,222 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 2023-05-31 08:00:36,223 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 2023-05-31 08:00:36,227 WARN [IPC Server handler 0 on default port 41475] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1015 2023-05-31 08:00:36,230 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 after 7ms 2023-05-31 08:00:36,435 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@3b3f945c] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1360413866-188.40.62.62-1685519991994:blk_1073741832_1015, datanode=DatanodeInfoWithStorage[127.0.0.1:36863,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1015, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data4/current/BP-1360413866-188.40.62.62-1685519991994/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:40,232 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 after 4009ms 2023-05-31 08:00:40,232 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685519994031 2023-05-31 08:00:40,253 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685519994695/Put/vlen=176/seqid=0] 2023-05-31 08:00:40,253 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #4: [default/info:d/1685519994813/Put/vlen=9/seqid=0] 2023-05-31 08:00:40,253 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #5: [hbase/info:d/1685519994852/Put/vlen=7/seqid=0] 2023-05-31 08:00:40,254 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685519995901/Put/vlen=232/seqid=0] 2023-05-31 08:00:40,254 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #4: [row1002/info:/1685520005080/Put/vlen=1045/seqid=0] 2023-05-31 08:00:40,254 DEBUG [Listener at 
localhost.localdomain/46609] wal.ProtobufLogReader(420): EOF at position 2162 2023-05-31 08:00:40,254 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 2023-05-31 08:00:40,254 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 2023-05-31 08:00:40,255 WARN [IPC Server handler 1 on default port 41475] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-31 08:00:40,255 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 after 1ms 2023-05-31 08:00:41,248 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@6ae74c87] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1360413866-188.40.62.62-1685519991994:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:46265,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data1/current/BP-1360413866-188.40.62.62-1685519991994/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:44,257 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 after 4003ms 2023-05-31 08:00:44,257 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520008571 2023-05-31 08:00:44,267 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #6: [row1003/info:/1685520018609/Put/vlen=1045/seqid=0] 2023-05-31 08:00:44,267 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #7: [row1004/info:/1685520020617/Put/vlen=1045/seqid=0] 2023-05-31 08:00:44,267 DEBUG [Listener at localhost.localdomain/46609] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-31 08:00:44,267 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 2023-05-31 08:00:44,267 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(86): Recover lease 
on dfs file hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 2023-05-31 08:00:44,268 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 after 1ms 2023-05-31 08:00:44,268 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520024115 2023-05-31 08:00:44,272 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(522): #9: [row1005/info:/1685520034191/Put/vlen=1045/seqid=0] 2023-05-31 08:00:44,272 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 2023-05-31 08:00:44,272 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 2023-05-31 08:00:44,272 WARN [IPC Server handler 2 on default port 41475] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File 
/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-31 08:00:44,273 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 after 1ms 2023-05-31 08:00:45,247 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-164752341_17 at /127.0.0.1:41900 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:36863:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41900 dst: /127.0.0.1:36863 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:36863 remote=/127.0.0.1:41900]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:45,250 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-164752341_17 at /127.0.0.1:38902 [Receiving block BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:46265:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38902 dst: /127.0.0.1:46265 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-31 08:00:45,250 WARN [ResponseProcessor for block BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-31 08:00:45,251 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 block BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36863,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK], 
DatanodeInfoWithStorage[127.0.0.1:46265,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36863,DS-82ce75f2-750f-413d-8328-131b9627e067,DISK]) is bad. 2023-05-31 08:00:45,260 WARN [DataStreamer for file /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 block BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at 
org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:48,274 INFO [Listener at localhost.localdomain/46609] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 after 
4002ms 2023-05-31 08:00:48,274 DEBUG [Listener at localhost.localdomain/46609] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 2023-05-31 08:00:48,283 DEBUG [Listener at localhost.localdomain/46609] wal.ProtobufLogReader(420): EOF at position 83 2023-05-31 08:00:48,285 INFO [Listener at localhost.localdomain/46609] regionserver.HRegion(2745): Flushing 75665e98cb5393fd33ec6fceea7403c5 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-31 08:00:48,286 WARN [RS:0;jenkins-hbase16:43665.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:48,286 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL 
FSHLog jenkins-hbase16.apache.org%2C43665%2C1685519993510:(num 1685520036198) roll requested 2023-05-31 08:00:48,286 DEBUG [Listener at localhost.localdomain/46609] regionserver.HRegion(2446): Flush status journal for 75665e98cb5393fd33ec6fceea7403c5: 2023-05-31 08:00:48,287 INFO [Listener at localhost.localdomain/46609] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at 
org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:48,288 INFO [Listener at localhost.localdomain/46609] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-05-31 08:00:48,289 WARN [RS_OPEN_META-regionserver/jenkins-hbase16:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:48,289 DEBUG [Listener at localhost.localdomain/46609] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-31 08:00:48,289 INFO [Listener at localhost.localdomain/46609] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at 
java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,290 INFO [Listener at localhost.localdomain/46609] regionserver.HRegion(2745): Flushing 8c17b1cd5a69f0f3c6183bb83ca1c326 1/1 column families, dataSize=78 B heapSize=488 B
2023-05-31 08:00:48,291 DEBUG [Listener at localhost.localdomain/46609] regionserver.HRegion(2446): Flush status journal for 8c17b1cd5a69f0f3c6183bb83ca1c326:
2023-05-31 08:00:48,291 INFO [Listener at localhost.localdomain/46609] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979)
    at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168)
    at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,293 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-05-31 08:00:48,293 INFO [Listener at localhost.localdomain/46609] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-05-31 08:00:48,293 DEBUG [Listener at localhost.localdomain/46609] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x752a4c73 to 127.0.0.1:50938
2023-05-31 08:00:48,293 DEBUG [Listener at localhost.localdomain/46609] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 08:00:48,294 DEBUG [Listener at localhost.localdomain/46609] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-05-31 08:00:48,294 DEBUG [Listener at localhost.localdomain/46609] util.JVMClusterUtil(257): Found active master hash=404071625, stopped=false
2023-05-31 08:00:48,300 INFO [Listener at localhost.localdomain/46609] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,33145,1685519993379
2023-05-31 08:00:48,303 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 newFile=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520048287
2023-05-31 08:00:48,303 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL
2023-05-31 08:00:48,303 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520048287
2023-05-31 08:00:48,303 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,303 ERROR [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198 failed. Cause="Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
", errors=3, hasUnflushedEntries=false
2023-05-31 08:00:48,303 ERROR [regionserver/jenkins-hbase16:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198, unflushedEntries=0
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,304 ERROR [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198, unflushedEntries=0
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState:
BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,304 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510
2023-05-31 08:00:48,305 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324)
    at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151)
    at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234)
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
2023-05-31 08:00:48,305 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510
2023-05-31 08:00:48,306 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing...
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,306 WARN [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:40683,DS-1393a46f-cd72-46fd-aa17-d5732ae7149c,DISK]] are bad. Aborting...
    at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,306 ERROR [regionserver/jenkins-hbase16:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase16.apache.org,43665,1685519993510: Failed log close in log roller *****
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198, unflushedEntries=0
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-31 08:00:48,306 ERROR [regionserver/jenkins-hbase16:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2023-05-31 08:00:48,307 DEBUG [regionserver/jenkins-hbase16:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory
2023-05-31 08:00:48,307 DEBUG [regionserver/jenkins-hbase16:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC
2023-05-31 08:00:48,307 DEBUG [regionserver/jenkins-hbase16:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication
2023-05-31 08:00:48,307 DEBUG [regionserver/jenkins-hbase16:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server
2023-05-31 08:00:48,307 INFO [regionserver/jenkins-hbase16:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1075314688, "init": 524288000, "max": 2051014656, "used": 422715520 }, "NonHeapMemoryUsage": { "committed": 139526144, "init": 2555904, "max": -1, "used": 137028688 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] }
2023-05-31 08:00:48,308 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33145] master.MasterRpcServices(609): jenkins-hbase16.apache.org,43665,1685519993510 reported a fatal error:
***** ABORTING region server jenkins-hbase16.apache.org,43665,1685519993510: Failed log close in log roller *****
Cause:
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/WALs/jenkins-hbase16.apache.org,43665,1685519993510/jenkins-hbase16.apache.org%2C43665%2C1685519993510.1685520036198, unflushedEntries=0
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828)
    at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304)
    at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1360413866-188.40.62.62-1685519991994:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy31.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-31 08:00:48,309 INFO [regionserver/jenkins-hbase16:0.logRoller] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,43665,1685519993510' ***** 2023-05-31 08:00:48,309 INFO [regionserver/jenkins-hbase16:0.logRoller] regionserver.HRegionServer(2309): STOPPED: Failed log close in log roller 2023-05-31 08:00:48,309 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase16.apache.org%2C43665%2C1685519993510.meta:.meta(num 1685519994194) roll requested 2023-05-31 08:00:48,309 DEBUG [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-31 08:00:48,309 INFO [RS:0;jenkins-hbase16:43665] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 08:00:48,309 INFO [RS:0;jenkins-hbase16:43665] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager abruptly. 
2023-05-31 08:00:48,310 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 08:00:48,310 INFO [RS:0;jenkins-hbase16:43665] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager abruptly. 2023-05-31 08:00:48,310 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 08:00:48,310 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(3303): Received CLOSE for 75665e98cb5393fd33ec6fceea7403c5 2023-05-31 08:00:48,310 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 08:00:48,310 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:48,310 INFO [Listener at localhost.localdomain/46609] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 08:00:48,310 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(3303): Received CLOSE for 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 08:00:48,311 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:00:48,311 DEBUG [Listener at localhost.localdomain/46609] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e04fc4e to 127.0.0.1:50938 2023-05-31 08:00:48,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 
75665e98cb5393fd33ec6fceea7403c5, disabling compactions & flushes 2023-05-31 08:00:48,311 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:00:48,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 08:00:48,311 DEBUG [Listener at localhost.localdomain/46609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:00:48,311 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1141): aborting server jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 08:00:48,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 08:00:48,312 DEBUG [RS:0;jenkins-hbase16:43665] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f1e0a12 to 127.0.0.1:50938 2023-05-31 08:00:48,312 DEBUG [RS:0;jenkins-hbase16:43665] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:00:48,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. after waiting 0 ms 2023-05-31 08:00:48,312 INFO [RS:0;jenkins-hbase16:43665] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 08:00:48,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 
2023-05-31 08:00:48,312 INFO [RS:0;jenkins-hbase16:43665] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 08:00:48,312 INFO [RS:0;jenkins-hbase16:43665] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 08:00:48,312 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 08:00:48,313 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 08:00:48,313 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1825): Memstore data size is 4304 in region TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 08:00:48,313 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1478): Online Regions={75665e98cb5393fd33ec6fceea7403c5=TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5., 1588230740=hbase:meta,,1.1588230740, 8c17b1cd5a69f0f3c6183bb83ca1c326=hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326.} 2023-05-31 08:00:48,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:00:48,313 DEBUG [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1504): Waiting on 1588230740, 75665e98cb5393fd33ec6fceea7403c5, 8c17b1cd5a69f0f3c6183bb83ca1c326 2023-05-31 08:00:48,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 
2023-05-31 08:00:48,313 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:00:48,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 08:00:48,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 75665e98cb5393fd33ec6fceea7403c5: 2023-05-31 08:00:48,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685519995065.75665e98cb5393fd33ec6fceea7403c5. 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 8c17b1cd5a69f0f3c6183bb83ca1c326, disabling compactions & flushes 2023-05-31 08:00:48,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,314 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1825): Memstore data size is 3028 in region hbase:meta,,1.1588230740 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 
after waiting 0 ms 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,314 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:00:48,314 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 08:00:48,314 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1825): Memstore data size is 78 in region hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 8c17b1cd5a69f0f3c6183bb83ca1c326: 2023-05-31 08:00:48,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685519994280.8c17b1cd5a69f0f3c6183bb83ca1c326. 2023-05-31 08:00:48,513 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,43665,1685519993510; all regions closed. 
2023-05-31 08:00:48,514 DEBUG [RS:0;jenkins-hbase16:43665] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:00:48,514 INFO [RS:0;jenkins-hbase16:43665] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:00:48,514 INFO [RS:0;jenkins-hbase16:43665] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-31 08:00:48,514 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:00:48,516 INFO [RS:0;jenkins-hbase16:43665] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:43665 2023-05-31 08:00:48,527 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,43665,1685519993510 2023-05-31 08:00:48,527 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:00:48,527 ERROR [Listener at localhost.localdomain/41991-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@184d9859 rejected from java.util.concurrent.ThreadPoolExecutor@1c720f88[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-05-31 08:00:48,527 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:00:48,534 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,43665,1685519993510] 2023-05-31 08:00:48,535 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,43665,1685519993510; numProcessing=1 2023-05-31 08:00:48,543 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,43665,1685519993510 already deleted, retry=false 2023-05-31 08:00:48,543 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,43665,1685519993510 expired; onlineServers=0 2023-05-31 08:00:48,543 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,33145,1685519993379' ***** 2023-05-31 08:00:48,543 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 08:00:48,544 DEBUG [M:0;jenkins-hbase16:33145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c2a2313, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:00:48,544 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 08:00:48,544 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,33145,1685519993379; all regions closed. 2023-05-31 08:00:48,544 DEBUG [M:0;jenkins-hbase16:33145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:00:48,544 DEBUG [M:0;jenkins-hbase16:33145] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 08:00:48,544 DEBUG [M:0;jenkins-hbase16:33145] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 08:00:48,545 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519993801] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685519993801,5,FailOnTimeoutGroup] 2023-05-31 08:00:48,545 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519993801] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685519993801,5,FailOnTimeoutGroup] 2023-05-31 08:00:48,544 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 08:00:48,546 INFO [M:0;jenkins-hbase16:33145] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 08:00:48,548 INFO [M:0;jenkins-hbase16:33145] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 08:00:48,549 INFO [M:0;jenkins-hbase16:33145] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-05-31 08:00:48,549 DEBUG [M:0;jenkins-hbase16:33145] master.HMaster(1512): Stopping service threads 2023-05-31 08:00:48,549 INFO [M:0;jenkins-hbase16:33145] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 08:00:48,550 ERROR [M:0;jenkins-hbase16:33145] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 08:00:48,550 INFO [M:0;jenkins-hbase16:33145] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 08:00:48,550 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-31 08:00:48,556 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 08:00:48,556 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:48,556 DEBUG [M:0;jenkins-hbase16:33145] zookeeper.ZKUtil(398): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 08:00:48,556 WARN [M:0;jenkins-hbase16:33145] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 08:00:48,557 INFO [M:0;jenkins-hbase16:33145] assignment.AssignmentManager(315): Stopping assignment 
manager 2023-05-31 08:00:48,557 INFO [M:0;jenkins-hbase16:33145] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 08:00:48,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:00:48,558 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:00:48,558 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:00:48,558 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:00:48,559 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:00:48,559 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:00:48,559 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.18 KB heapSize=45.83 KB 2023-05-31 08:00:48,580 INFO [M:0;jenkins-hbase16:33145] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.18 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d90d8140980548d195de83b38969bed7 2023-05-31 08:00:48,587 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d90d8140980548d195de83b38969bed7 as hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d90d8140980548d195de83b38969bed7 2023-05-31 08:00:48,593 INFO [M:0;jenkins-hbase16:33145] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41475/user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d90d8140980548d195de83b38969bed7, entries=11, sequenceid=92, filesize=7.0 K 2023-05-31 08:00:48,594 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegion(2948): Finished flush of dataSize ~38.18 KB/39101, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 35ms, sequenceid=92, compaction requested=false 2023-05-31 08:00:48,595 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:00:48,595 DEBUG [M:0;jenkins-hbase16:33145] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:00:48,595 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bb46d41a-41d3-b9e0-af2a-9dc0f1cd4e70/MasterData/WALs/jenkins-hbase16.apache.org,33145,1685519993379 2023-05-31 08:00:48,598 INFO [M:0;jenkins-hbase16:33145] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 08:00:48,598 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:00:48,599 INFO [M:0;jenkins-hbase16:33145] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:33145 2023-05-31 08:00:48,609 DEBUG [M:0;jenkins-hbase16:33145] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,33145,1685519993379 already deleted, retry=false 2023-05-31 08:00:48,705 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:00:48,705 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): regionserver:43665-0x100804048540001, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:00:48,705 INFO [RS:0;jenkins-hbase16:43665] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,43665,1685519993510; zookeeper connection closed. 
2023-05-31 08:00:48,706 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c2a7301] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c2a7301 2023-05-31 08:00:48,713 INFO [Listener at localhost.localdomain/46609] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 08:00:48,805 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:00:48,805 INFO [M:0;jenkins-hbase16:33145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,33145,1685519993379; zookeeper connection closed. 2023-05-31 08:00:48,805 DEBUG [Listener at localhost.localdomain/41991-EventThread] zookeeper.ZKWatcher(600): master:33145-0x100804048540000, quorum=127.0.0.1:50938, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:00:48,808 WARN [Listener at localhost.localdomain/46609] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:00:48,816 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 08:00:48,923 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:00:48,923 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 360c7dcb-78a8-48bb-bb72-b3ba6f449dc3) service to localhost.localdomain/127.0.0.1:41475 2023-05-31 08:00:48,924 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data3/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:48,925 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data4/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:48,929 WARN [Listener at localhost.localdomain/46609] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:00:48,935 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:00:49,043 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:00:49,044 WARN [BP-1360413866-188.40.62.62-1685519991994 heartbeating to localhost.localdomain/127.0.0.1:41475] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1360413866-188.40.62.62-1685519991994 (Datanode Uuid 50966384-c066-4cf1-9e9f-8904981cd7e3) service to localhost.localdomain/127.0.0.1:41475
2023-05-31 08:00:49,045 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data1/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:49,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/cluster_8982dffa-ffc1-9574-7556-5b6601266e71/dfs/data/data2/current/BP-1360413866-188.40.62.62-1685519991994] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:00:49,058 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 08:00:49,176 INFO [Listener at localhost.localdomain/46609] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 08:00:49,190 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 08:00:49,199 INFO [Listener at localhost.localdomain/46609] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=88 (was 78)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:41475 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-28-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/46609
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-28-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:41475
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-4-7
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:41475 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-9-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-26-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41475
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ForkJoinPool-2-worker-3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Potentially hanging thread: nioEventLoopGroup-29-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-29-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-27-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-29-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (2031846989) connection to localhost.localdomain/127.0.0.1:41475 from jenkins.hfs.3
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
- Thread LEAK? -, OpenFileDescriptor=461 (was 463), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=87 (was 128), ProcessCount=165 (was 165), AvailableMemoryMB=7709 (was 7879)
2023-05-31 08:00:49,206 INFO [Listener at localhost.localdomain/46609] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=88, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=87, ProcessCount=165, AvailableMemoryMB=7709
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/hadoop.log.dir so I do NOT create it in target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2670d051-68f6-e944-1f05-eab7c6c48bb8/hadoop.tmp.dir so I do NOT create it in target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726, deleteOnExit=true
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/test.cache.data in system properties and HBase conf
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/hadoop.tmp.dir in system properties and HBase conf
2023-05-31 08:00:49,207 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/hadoop.log.dir in system properties and HBase conf
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/mapreduce.cluster.local.dir in system properties and HBase conf
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-05-31 08:00:49,208 DEBUG [Listener at localhost.localdomain/46609] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-05-31 08:00:49,208 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/nfs.dump.dir in system properties and HBase conf
2023-05-31 08:00:49,209 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/java.io.tmpdir in system properties and HBase conf
2023-05-31 08:00:49,210 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/dfs.journalnode.edits.dir in system properties and HBase conf
2023-05-31 08:00:49,210 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-05-31 08:00:49,210 INFO [Listener at localhost.localdomain/46609] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-05-31 08:00:49,211 WARN [Listener at localhost.localdomain/46609] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 08:00:49,213 WARN [Listener at localhost.localdomain/46609] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 08:00:49,213 WARN [Listener at localhost.localdomain/46609] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 08:00:49,459 WARN [Listener at localhost.localdomain/46609] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:49,462 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:49,468 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/java.io.tmpdir/Jetty_localhost_localdomain_41309_hdfs____.prnosn/webapp
2023-05-31 08:00:49,537 INFO [Listener at localhost.localdomain/46609] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41309
2023-05-31 08:00:49,539 WARN [Listener at localhost.localdomain/46609] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000.
2023-05-31 08:00:49,540 WARN [Listener at localhost.localdomain/46609] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS
2023-05-31 08:00:49,540 WARN [Listener at localhost.localdomain/46609] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-05-31 08:00:49,714 WARN [Listener at localhost.localdomain/43541] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:49,722 WARN [Listener at localhost.localdomain/43541] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 08:00:49,724 WARN [Listener at localhost.localdomain/43541] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:49,726 INFO [Listener at localhost.localdomain/43541] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:49,731 INFO [Listener at localhost.localdomain/43541] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/java.io.tmpdir/Jetty_localhost_42507_datanode____fwzoo5/webapp
2023-05-31 08:00:49,804 INFO [Listener at localhost.localdomain/43541] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42507
2023-05-31 08:00:49,808 WARN [Listener at localhost.localdomain/44231] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:49,818 WARN [Listener at localhost.localdomain/44231] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-05-31 08:00:49,819 WARN [Listener at localhost.localdomain/44231] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-05-31 08:00:49,820 INFO [Listener at localhost.localdomain/44231] log.Slf4jLog(67): jetty-6.1.26
2023-05-31 08:00:49,824 INFO [Listener at localhost.localdomain/44231] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/java.io.tmpdir/Jetty_localhost_35819_datanode____oxp1gl/webapp
2023-05-31 08:00:49,897 INFO [Listener at localhost.localdomain/44231] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35819
2023-05-31 08:00:49,903 WARN [Listener at localhost.localdomain/35759] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-05-31 08:00:49,906 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-05-31 08:00:50,397 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6da2c07115d0d15: Processing first storage report for DS-cf5d2339-8e04-453c-977d-41bb762ec940 from datanode bb9e7741-0d3f-4f58-abd9-fd55cbc7042f
2023-05-31 08:00:50,397 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6da2c07115d0d15: from storage DS-cf5d2339-8e04-453c-977d-41bb762ec940 node DatanodeRegistration(127.0.0.1:43177, datanodeUuid=bb9e7741-0d3f-4f58-abd9-fd55cbc7042f, infoPort=43039, infoSecurePort=0, ipcPort=44231, storageInfo=lv=-57;cid=testClusterID;nsid=1736328124;c=1685520049214), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:50,397 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6da2c07115d0d15: Processing first storage report for DS-dbfcce3c-20ea-4ea5-bd77-00a8c4bdaa86 from datanode bb9e7741-0d3f-4f58-abd9-fd55cbc7042f
2023-05-31 08:00:50,397 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6da2c07115d0d15: from storage DS-dbfcce3c-20ea-4ea5-bd77-00a8c4bdaa86 node DatanodeRegistration(127.0.0.1:43177, datanodeUuid=bb9e7741-0d3f-4f58-abd9-fd55cbc7042f, infoPort=43039, infoSecurePort=0, ipcPort=44231, storageInfo=lv=-57;cid=testClusterID;nsid=1736328124;c=1685520049214), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:50,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x730ad80aab32baaa: Processing first storage report for DS-576b9927-00d2-4899-ae5a-953681c947cc from datanode d58fae8a-dde7-4daa-b641-1f8f10f9ec39
2023-05-31 08:00:50,555 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x730ad80aab32baaa: from storage DS-576b9927-00d2-4899-ae5a-953681c947cc node DatanodeRegistration(127.0.0.1:34371, datanodeUuid=d58fae8a-dde7-4daa-b641-1f8f10f9ec39, infoPort=43941, infoSecurePort=0, ipcPort=35759, storageInfo=lv=-57;cid=testClusterID;nsid=1736328124;c=1685520049214), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:50,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x730ad80aab32baaa: Processing first storage report for DS-0fa6d3b2-980f-4379-a24f-d17c56ecc68d from datanode d58fae8a-dde7-4daa-b641-1f8f10f9ec39
2023-05-31 08:00:50,555 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x730ad80aab32baaa: from storage DS-0fa6d3b2-980f-4379-a24f-d17c56ecc68d node DatanodeRegistration(127.0.0.1:34371, datanodeUuid=d58fae8a-dde7-4daa-b641-1f8f10f9ec39, infoPort=43941, infoSecurePort=0, ipcPort=35759, storageInfo=lv=-57;cid=testClusterID;nsid=1736328124;c=1685520049214), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-05-31 08:00:50,618 DEBUG [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0
2023-05-31 08:00:50,623 INFO [Listener at localhost.localdomain/35759] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/zookeeper_0, clientPort=61345, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-05-31 08:00:50,625 INFO [Listener at localhost.localdomain/35759] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61345
2023-05-31 08:00:50,626 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 08:00:50,628 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-05-31 08:00:50,642 INFO [Listener at localhost.localdomain/35759] util.FSUtils(471): Created version file
at hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f with version=8 2023-05-31 08:00:50,642 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,644 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:00:50,645 INFO [Listener at localhost.localdomain/35759] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 08:00:50,646 INFO [Listener at localhost.localdomain/35759] ipc.NettyRpcServer(120): Bind to /188.40.62.62:46209 2023-05-31 08:00:50,646 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:50,647 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:50,648 INFO [Listener at localhost.localdomain/35759] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46209 connecting to ZooKeeper ensemble=127.0.0.1:61345 2023-05-31 08:00:50,693 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:462090x0, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:00:50,694 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46209-0x100804128030000 connected 2023-05-31 08:00:50,769 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:00:50,769 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:00:50,770 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:00:50,771 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46209 2023-05-31 08:00:50,772 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46209 2023-05-31 08:00:50,772 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46209 2023-05-31 08:00:50,773 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46209 2023-05-31 08:00:50,773 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46209 2023-05-31 08:00:50,773 INFO [Listener at localhost.localdomain/35759] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f, hbase.cluster.distributed=false 2023-05-31 08:00:50,785 INFO [Listener at localhost.localdomain/35759] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:00:50,785 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,786 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,786 INFO [Listener at localhost.localdomain/35759] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:00:50,786 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:00:50,786 INFO [Listener at localhost.localdomain/35759] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:00:50,786 INFO [Listener at localhost.localdomain/35759] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 08:00:50,787 INFO [Listener at localhost.localdomain/35759] ipc.NettyRpcServer(120): Bind to /188.40.62.62:43783 2023-05-31 08:00:50,788 INFO [Listener at localhost.localdomain/35759] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 08:00:50,788 DEBUG [Listener at localhost.localdomain/35759] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 08:00:50,789 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:50,790 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:50,791 INFO [Listener at localhost.localdomain/35759] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43783 connecting to ZooKeeper ensemble=127.0.0.1:61345 2023-05-31 08:00:50,801 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:437830x0, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:00:50,802 DEBUG 
[Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): regionserver:437830x0, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:00:50,802 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43783-0x100804128030001 connected 2023-05-31 08:00:50,803 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:00:50,803 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:00:50,804 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43783 2023-05-31 08:00:50,804 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43783 2023-05-31 08:00:50,804 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43783 2023-05-31 08:00:50,805 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43783 2023-05-31 08:00:50,805 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43783 2023-05-31 08:00:50,806 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:50,817 DEBUG [Listener at localhost.localdomain/35759-EventThread] 
zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:00:50,818 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:50,826 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:00:50,826 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:00:50,826 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:50,827 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:00:50,829 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,46209,1685520050643 from backup master directory 2023-05-31 08:00:50,830 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:00:50,839 DEBUG [Listener at 
localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:50,839 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:00:50,839 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:50,839 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:00:50,857 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/hbase.id with ID: 686bf7c3-7629-44ef-9458-8665a4646032 2023-05-31 08:00:50,876 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:50,884 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:50,894 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6d7bf884 to 127.0.0.1:61345 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:00:50,906 
DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47d5b125, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:00:50,907 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 08:00:50,907 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 08:00:50,907 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:00:50,909 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store-tmp 2023-05-31 08:00:50,916 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:50,917 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:00:50,917 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:00:50,917 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:00:50,917 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:00:50,917 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:00:50,917 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:00:50,917 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:00:50,918 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/WALs/jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:50,920 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C46209%2C1685520050643, suffix=, logDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/WALs/jenkins-hbase16.apache.org,46209,1685520050643, archiveDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/oldWALs, maxLogs=10 2023-05-31 08:00:50,925 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/WALs/jenkins-hbase16.apache.org,46209,1685520050643/jenkins-hbase16.apache.org%2C46209%2C1685520050643.1685520050920 2023-05-31 08:00:50,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK], DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK]] 2023-05-31 08:00:50,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:00:50,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:50,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,928 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,930 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 08:00:50,930 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 08:00:50,930 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:50,931 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,931 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,936 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:00:50,937 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:00:50,938 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=842434, jitterRate=0.07121092081069946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:00:50,938 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:00:50,938 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 08:00:50,939 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 08:00:50,939 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 08:00:50,939 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 08:00:50,939 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 08:00:50,940 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 08:00:50,940 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 08:00:50,941 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 08:00:50,943 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 08:00:50,955 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 08:00:50,956 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 08:00:50,956 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 08:00:50,956 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 08:00:50,957 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 08:00:50,967 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:50,968 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 08:00:50,968 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 08:00:50,969 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 08:00:50,976 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:00:50,976 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:00:50,976 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:50,976 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,46209,1685520050643, sessionid=0x100804128030000, setting cluster-up flag (Was=false) 2023-05-31 08:00:50,993 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:51,018 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 08:00:51,021 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:51,040 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:51,068 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 08:00:51,070 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:51,072 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.hbase-snapshot/.tmp 2023-05-31 08:00:51,078 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:00:51,079 DEBUG 
[master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:00:51,079 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,081 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685520081081 2023-05-31 08:00:51,081 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 08:00:51,081 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 08:00:51,081 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 08:00:51,082 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:00:51,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 08:00:51,083 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 08:00:51,084 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => 
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:00:51,084 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520051084,5,FailOnTimeoutGroup] 2023-05-31 08:00:51,084 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520051084,5,FailOnTimeoutGroup] 2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,084 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,095 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:00:51,096 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:00:51,096 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f 2023-05-31 08:00:51,105 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:51,107 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(951): ClusterId : 686bf7c3-7629-44ef-9458-8665a4646032 2023-05-31 08:00:51,108 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 08:00:51,108 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:00:51,111 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/info 2023-05-31 08:00:51,111 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:00:51,112 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:00:51,114 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:00:51,114 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 08:00:51,115 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,115 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:00:51,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/table 2023-05-31 08:00:51,117 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:00:51,118 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,118 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 08:00:51,118 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 08:00:51,118 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740 2023-05-31 08:00:51,120 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740 2023-05-31 08:00:51,122 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 08:00:51,124 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:00:51,126 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:00:51,126 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=784248, jitterRate=-0.0027777105569839478}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:00:51,126 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:00:51,126 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 08:00:51,126 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:00:51,127 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:00:51,127 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 08:00:51,128 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:00:51,128 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 08:00:51,128 DEBUG 
[RS:0;jenkins-hbase16:43783] zookeeper.ReadOnlyZKClient(139): Connect 0x5d0cb38b to 127.0.0.1:61345 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:00:51,128 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:00:51,128 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:00:51,130 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:00:51,130 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 08:00:51,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 08:00:51,132 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 08:00:51,133 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 08:00:51,140 DEBUG [RS:0;jenkins-hbase16:43783] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53cb565a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:00:51,140 DEBUG [RS:0;jenkins-hbase16:43783] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57f4d8fb, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:00:51,148 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:43783 2023-05-31 08:00:51,149 INFO [RS:0;jenkins-hbase16:43783] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 08:00:51,149 INFO [RS:0;jenkins-hbase16:43783] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 08:00:51,149 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1022): About to register with Master. 2023-05-31 08:00:51,149 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,46209,1685520050643 with isa=jenkins-hbase16.apache.org/188.40.62.62:43783, startcode=1685520050785 2023-05-31 08:00:51,149 DEBUG [RS:0;jenkins-hbase16:43783] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 08:00:51,153 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:58797, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 08:00:51,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,155 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f 2023-05-31 08:00:51,155 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43541 2023-05-31 08:00:51,155 
DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 08:00:51,173 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:00:51,173 DEBUG [RS:0;jenkins-hbase16:43783] zookeeper.ZKUtil(162): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,173 WARN [RS:0;jenkins-hbase16:43783] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:00:51,173 INFO [RS:0;jenkins-hbase16:43783] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:00:51,174 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,174 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,43783,1685520050785] 2023-05-31 08:00:51,177 DEBUG [RS:0;jenkins-hbase16:43783] zookeeper.ZKUtil(162): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,178 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 08:00:51,178 INFO [RS:0;jenkins-hbase16:43783] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 08:00:51,179 INFO 
[RS:0;jenkins-hbase16:43783] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 08:00:51,180 INFO [RS:0;jenkins-hbase16:43783] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 08:00:51,180 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,180 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 08:00:51,181 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting 
executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,182 DEBUG [RS:0;jenkins-hbase16:43783] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:00:51,183 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,183 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,184 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,194 INFO [RS:0;jenkins-hbase16:43783] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 08:00:51,194 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,43783,1685520050785-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 08:00:51,203 INFO [RS:0;jenkins-hbase16:43783] regionserver.Replication(203): jenkins-hbase16.apache.org,43783,1685520050785 started 2023-05-31 08:00:51,203 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,43783,1685520050785, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:43783, sessionid=0x100804128030001 2023-05-31 08:00:51,203 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 08:00:51,203 DEBUG [RS:0;jenkins-hbase16:43783] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,203 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,43783,1685520050785' 2023-05-31 08:00:51,203 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,43783,1685520050785' 2023-05-31 08:00:51,204 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 08:00:51,205 DEBUG [RS:0;jenkins-hbase16:43783] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 08:00:51,205 DEBUG [RS:0;jenkins-hbase16:43783] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 08:00:51,205 INFO [RS:0;jenkins-hbase16:43783] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 08:00:51,205 INFO [RS:0;jenkins-hbase16:43783] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 08:00:51,283 DEBUG [jenkins-hbase16:46209] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 08:00:51,284 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,43783,1685520050785, state=OPENING 2023-05-31 08:00:51,293 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 08:00:51,301 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:51,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,43783,1685520050785}] 2023-05-31 08:00:51,302 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:00:51,308 INFO [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C43783%2C1685520050785, suffix=, 
logDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785, archiveDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs, maxLogs=32 2023-05-31 08:00:51,320 INFO [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520051309 2023-05-31 08:00:51,320 DEBUG [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK], DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK]] 2023-05-31 08:00:51,459 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,459 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 08:00:51,465 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:50784, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 08:00:51,473 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 08:00:51,473 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:00:51,475 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C43783%2C1685520050785.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785, archiveDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs, maxLogs=32 2023-05-31 08:00:51,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.meta.1685520051475.meta 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK], DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK]] 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 08:00:51,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 08:00:51,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 08:00:51,483 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:00:51,484 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/info 2023-05-31 08:00:51,484 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/info 2023-05-31 08:00:51,484 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:00:51,485 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,485 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:00:51,485 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:00:51,486 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:00:51,486 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 08:00:51,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,486 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:00:51,488 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/table 2023-05-31 08:00:51,488 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/table 2023-05-31 08:00:51,488 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:00:51,488 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:51,489 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740 2023-05-31 08:00:51,490 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740 2023-05-31 08:00:51,493 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 08:00:51,494 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:00:51,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=780791, jitterRate=-0.007173791527748108}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:00:51,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:00:51,498 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685520051459 2023-05-31 08:00:51,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 08:00:51,502 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 08:00:51,504 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,43783,1685520050785, state=OPEN 2023-05-31 08:00:51,514 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 08:00:51,514 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:00:51,518 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 08:00:51,518 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,43783,1685520050785 in 212 msec 2023-05-31 08:00:51,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 08:00:51,524 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 388 msec 2023-05-31 08:00:51,528 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 451 msec 2023-05-31 08:00:51,528 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685520051528, completionTime=-1 2023-05-31 08:00:51,528 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 08:00:51,528 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 08:00:51,532 DEBUG [hconnection-0x1fa4bc20-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:00:51,535 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:50796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:00:51,537 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 08:00:51,537 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685520111537 2023-05-31 08:00:51,537 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685520171537 2023-05-31 08:00:51,537 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 9 msec 2023-05-31 08:00:51,586 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,46209,1685520050643-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,586 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,46209,1685520050643-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,586 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,46209,1685520050643-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 08:00:51,587 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:46209, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,587 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 08:00:51,587 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 08:00:51,588 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:00:51,589 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 08:00:51,590 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 08:00:51,594 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 08:00:51,595 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 08:00:51,598 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,599 DEBUG [HFileArchiver-7] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606 empty. 2023-05-31 08:00:51,600 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,600 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 08:00:51,618 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 08:00:51,619 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7ce20a5666592d82a2d138e63056f606, NAME => 'hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7ce20a5666592d82a2d138e63056f606, disabling compactions & flushes 2023-05-31 08:00:51,630 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. after waiting 0 ms 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:51,630 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:51,630 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7ce20a5666592d82a2d138e63056f606: 2023-05-31 08:00:51,633 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 08:00:51,634 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520051633"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520051633"}]},"ts":"1685520051633"} 2023-05-31 08:00:51,636 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 08:00:51,638 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 08:00:51,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520051638"}]},"ts":"1685520051638"} 2023-05-31 08:00:51,640 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 08:00:51,673 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7ce20a5666592d82a2d138e63056f606, ASSIGN}] 2023-05-31 08:00:51,676 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7ce20a5666592d82a2d138e63056f606, ASSIGN 2023-05-31 08:00:51,677 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7ce20a5666592d82a2d138e63056f606, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,43783,1685520050785; forceNewPlan=false, retain=false 2023-05-31 08:00:51,828 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7ce20a5666592d82a2d138e63056f606, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:51,828 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520051828"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520051828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520051828"}]},"ts":"1685520051828"} 2023-05-31 08:00:51,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 7ce20a5666592d82a2d138e63056f606, server=jenkins-hbase16.apache.org,43783,1685520050785}] 2023-05-31 08:00:51,986 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:51,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7ce20a5666592d82a2d138e63056f606, NAME => 'hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:00:51,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,988 INFO 
[StoreOpener-7ce20a5666592d82a2d138e63056f606-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,989 DEBUG [StoreOpener-7ce20a5666592d82a2d138e63056f606-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/info 2023-05-31 08:00:51,990 DEBUG [StoreOpener-7ce20a5666592d82a2d138e63056f606-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/info 2023-05-31 08:00:51,990 INFO [StoreOpener-7ce20a5666592d82a2d138e63056f606-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7ce20a5666592d82a2d138e63056f606 columnFamilyName info 2023-05-31 08:00:51,990 INFO [StoreOpener-7ce20a5666592d82a2d138e63056f606-1] regionserver.HStore(310): Store=7ce20a5666592d82a2d138e63056f606/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 08:00:51,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,992 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:00:51,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:00:51,997 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 7ce20a5666592d82a2d138e63056f606; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=818623, jitterRate=0.040933653712272644}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:00:51,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 7ce20a5666592d82a2d138e63056f606: 2023-05-31 08:00:51,998 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606., pid=6, masterSystemTime=1685520051983 2023-05-31 08:00:52,001 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:52,001 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:00:52,001 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=7ce20a5666592d82a2d138e63056f606, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:52,002 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520052001"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520052001"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520052001"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520052001"}]},"ts":"1685520052001"} 2023-05-31 08:00:52,005 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 08:00:52,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 7ce20a5666592d82a2d138e63056f606, server=jenkins-hbase16.apache.org,43783,1685520050785 in 173 msec 2023-05-31 08:00:52,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 08:00:52,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7ce20a5666592d82a2d138e63056f606, ASSIGN in 333 msec 2023-05-31 08:00:52,008 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 08:00:52,009 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520052009"}]},"ts":"1685520052009"} 2023-05-31 08:00:52,010 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 08:00:52,018 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 08:00:52,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 431 msec 2023-05-31 08:00:52,091 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 08:00:52,101 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:00:52,101 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:52,105 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 08:00:52,122 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, 
quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:00:52,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 28 msec 2023-05-31 08:00:52,137 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 08:00:52,156 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:00:52,168 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 29 msec 2023-05-31 08:00:52,198 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 08:00:52,214 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 08:00:52,215 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.375sec 2023-05-31 08:00:52,215 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 08:00:52,215 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 08:00:52,215 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 08:00:52,215 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,46209,1685520050643-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 08:00:52,216 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,46209,1685520050643-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 08:00:52,221 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 08:00:52,308 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ReadOnlyZKClient(139): Connect 0x5568e635 to 127.0.0.1:61345 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:00:52,319 DEBUG [Listener at localhost.localdomain/35759] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79183a2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:00:52,322 DEBUG [hconnection-0x25a46c48-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:00:52,326 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:50808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:00:52,329 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:00:52,329 INFO [Listener at localhost.localdomain/35759] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:00:52,348 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 08:00:52,348 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:00:52,349 INFO [Listener at localhost.localdomain/35759] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 08:00:52,352 DEBUG [Listener at localhost.localdomain/35759] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 08:00:52,356 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:42666, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 08:00:52,358 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 08:00:52,358 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 08:00:52,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 08:00:52,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:00:52,363 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 08:00:52,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(697): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-31 08:00:52,364 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 08:00:52,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 08:00:52,367 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,368 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c empty. 2023-05-31 08:00:52,368 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,368 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-31 08:00:52,381 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 08:00:52,383 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => bcb63872e2a6df39a97b7e0f9611811c, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/.tmp 2023-05-31 08:00:52,389 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:52,389 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing bcb63872e2a6df39a97b7e0f9611811c, disabling compactions & flushes 2023-05-31 08:00:52,389 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,389 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,389 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. after waiting 0 ms 2023-05-31 08:00:52,389 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 
2023-05-31 08:00:52,390 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,390 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:00:52,392 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 08:00:52,393 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685520052392"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520052392"}]},"ts":"1685520052392"} 2023-05-31 08:00:52,394 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 08:00:52,395 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 08:00:52,395 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520052395"}]},"ts":"1685520052395"} 2023-05-31 08:00:52,397 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-31 08:00:52,416 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=bcb63872e2a6df39a97b7e0f9611811c, ASSIGN}] 2023-05-31 08:00:52,418 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=bcb63872e2a6df39a97b7e0f9611811c, ASSIGN 2023-05-31 08:00:52,420 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=bcb63872e2a6df39a97b7e0f9611811c, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,43783,1685520050785; forceNewPlan=false, retain=false 2023-05-31 08:00:52,571 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=bcb63872e2a6df39a97b7e0f9611811c, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,43783,1685520050785 
2023-05-31 08:00:52,571 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685520052571"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520052571"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520052571"}]},"ts":"1685520052571"} 2023-05-31 08:00:52,575 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure bcb63872e2a6df39a97b7e0f9611811c, server=jenkins-hbase16.apache.org,43783,1685520050785}] 2023-05-31 08:00:52,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcb63872e2a6df39a97b7e0f9611811c, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:00:52,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:00:52,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 
bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,744 INFO [StoreOpener-bcb63872e2a6df39a97b7e0f9611811c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,746 DEBUG [StoreOpener-bcb63872e2a6df39a97b7e0f9611811c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info 2023-05-31 08:00:52,746 DEBUG [StoreOpener-bcb63872e2a6df39a97b7e0f9611811c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info 2023-05-31 08:00:52,746 INFO [StoreOpener-bcb63872e2a6df39a97b7e0f9611811c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
bcb63872e2a6df39a97b7e0f9611811c columnFamilyName info 2023-05-31 08:00:52,747 INFO [StoreOpener-bcb63872e2a6df39a97b7e0f9611811c-1] regionserver.HStore(310): Store=bcb63872e2a6df39a97b7e0f9611811c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:00:52,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:00:52,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:00:52,755 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened bcb63872e2a6df39a97b7e0f9611811c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=754950, jitterRate=-0.04003143310546875}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:00:52,755 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:00:52,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c., pid=11, masterSystemTime=1685520052730 2023-05-31 08:00:52,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:00:52,758 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=bcb63872e2a6df39a97b7e0f9611811c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:00:52,758 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685520052758"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520052758"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520052758"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520052758"}]},"ts":"1685520052758"} 2023-05-31 08:00:52,764 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 08:00:52,764 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure bcb63872e2a6df39a97b7e0f9611811c, 
server=jenkins-hbase16.apache.org,43783,1685520050785 in 186 msec 2023-05-31 08:00:52,766 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 08:00:52,766 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=bcb63872e2a6df39a97b7e0f9611811c, ASSIGN in 348 msec 2023-05-31 08:00:52,767 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 08:00:52,767 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520052767"}]},"ts":"1685520052767"} 2023-05-31 08:00:52,769 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-31 08:00:52,802 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 08:00:52,804 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 444 msec 2023-05-31 08:00:53,861 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 08:00:57,178 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 08:00:57,179 DEBUG 
[HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 08:00:57,180 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:02,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 08:01:02,366 INFO [Listener at localhost.localdomain/35759] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-31 08:01:02,368 DEBUG [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:02,369 DEBUG [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 
2023-05-31 08:01:02,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(933): Client=jenkins//188.40.62.62 procedure request for: flush-table-proc 2023-05-31 08:01:02,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-31 08:01:02,389 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-31 08:01:02,389 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:02,390 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-31 08:01:02,390 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 
2023-05-31 08:01:02,391 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 08:01:02,391 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 08:01:02,400 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:02,401 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:02,401 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:02,401 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:02,401 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:02,401 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 08:01:02,401 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 08:01:02,402 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-31 08:01:02,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 08:01:02,403 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-31 08:01:02,403 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-31 08:01:02,406 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-31 08:01:02,406 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-31 08:01:02,406 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:02,407 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-31 08:01:02,408 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 08:01:02,408 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 
2023-05-31 08:01:02,408 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:02,409 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. started... 2023-05-31 08:01:02,409 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 7ce20a5666592d82a2d138e63056f606 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 08:01:02,425 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/.tmp/info/3752c397882d4aa590e50394f94b9366 2023-05-31 08:01:02,439 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/.tmp/info/3752c397882d4aa590e50394f94b9366 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/info/3752c397882d4aa590e50394f94b9366 2023-05-31 08:01:02,446 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/info/3752c397882d4aa590e50394f94b9366, entries=2, sequenceid=6, filesize=4.8 K 
2023-05-31 08:01:02,447 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 7ce20a5666592d82a2d138e63056f606 in 38ms, sequenceid=6, compaction requested=false 2023-05-31 08:01:02,448 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 7ce20a5666592d82a2d138e63056f606: 2023-05-31 08:01:02,448 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:02,448 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 08:01:02,448 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
2023-05-31 08:01:02,448 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,448 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired
2023-05-31 08:01:02,448 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure (hbase:namespace) in zk
2023-05-31 08:01:02,459 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,459 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,459 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,459 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 08:01:02,459 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 08:01:02,459 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,459 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-05-31 08:01:02,459 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 08:01:02,460 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 08:01:02,460 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,460 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,461 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 08:01:02,461 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure 'hbase:namespace' on coordinator
2023-05-31 08:01:02,461 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution.
2023-05-31 08:01:02,461 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@70f55181[Count = 0] remaining members to acquire global barrier
2023-05-31 08:01:02,461 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,472 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,472 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,472 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,472 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator.
2023-05-31 08:01:02,472 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed
2023-05-31 08:01:02,473 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,473 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-05-31 08:01:02,473 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase16.apache.org,43783,1685520050785' in zk
2023-05-31 08:01:02,481 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion
2023-05-31 08:01:02,481 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,481 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-05-31 08:01:02,481 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,481 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 08:01:02,482 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 08:01:02,481 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed.
2023-05-31 08:01:02,482 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 08:01:02,482 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 08:01:02,483 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,483 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,483 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 08:01:02,484 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,484 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,484 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase16.apache.org,43783,1685520050785':
2023-05-31 08:01:02,484 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more
2023-05-31 08:01:02,484 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed
2023-05-31 08:01:02,485 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase.
2023-05-31 08:01:02,485 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures
2023-05-31 08:01:02,485 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace
2023-05-31 08:01:02,485 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2023-05-31 08:01:02,492 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,492 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,492 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 08:01:02,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 08:01:02,492 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-05-31 08:01:02,492 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 08:01:02,493 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,493 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-05-31 08:01:02,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 08:01:02,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,493 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 08:01:02,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,494 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,494 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 08:01:02,495 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace
2023-05-31 08:01:02,495 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace
2023-05-31 08:01:02,509 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:02,509 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort
2023-05-31 08:01:02,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace'
2023-05-31 08:01:02,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful!
2023-05-31 08:01:02,509 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,509 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 08:01:02,511 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace
2023-05-31 08:01:02,511 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace
2023-05-31 08:01:02,512 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort
2023-05-31 08:01:02,512 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-05-31 08:01:02,514 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry)
2023-05-31 08:01:02,514 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion.
2023-05-31 08:01:12,514 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2704): Getting current status of procedure from master...
2023-05-31 08:01:12,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done
2023-05-31 08:01:12,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(933): Client=jenkins//188.40.62.62 procedure request for: flush-table-proc
2023-05-31 08:01:12,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,541 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling'
2023-05-31 08:01:12,541 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-05-31 08:01:12,542 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire'
2023-05-31 08:01:12,542 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members.
2023-05-31 08:01:12,542 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,542 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,563 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,563 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired
2023-05-31 08:01:12,564 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired
2023-05-31 08:01:12,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-05-31 08:01:12,564 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,564 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire'
2023-05-31 08:01:12,564 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,564 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4
2023-05-31 08:01:12,565 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,565 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,565 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,565 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms
2023-05-31 08:01:12,565 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms
2023-05-31 08:01:12,566 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage
2023-05-31 08:01:12,566 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions
2023-05-31 08:01:12,566 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish.
2023-05-31 08:01:12,566 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.
2023-05-31 08:01:12,566 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. started...
2023-05-31 08:01:12,566 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing bcb63872e2a6df39a97b7e0f9611811c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-05-31 08:01:12,586 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/70566624dd194d019a222cfafd11ead0
2023-05-31 08:01:12,594 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/70566624dd194d019a222cfafd11ead0 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0
2023-05-31 08:01:12,601 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0, entries=1, sequenceid=5, filesize=5.8 K
2023-05-31 08:01:12,602 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for bcb63872e2a6df39a97b7e0f9611811c in 36ms, sequenceid=5, compaction requested=false
2023-05-31 08:01:12,602 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for bcb63872e2a6df39a97b7e0f9611811c:
2023-05-31 08:01:12,602 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.
2023-05-31 08:01:12,602 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks.
2023-05-31 08:01:12,602 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks.
2023-05-31 08:01:12,602 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,603 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired
2023-05-31 08:01:12,603 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk
2023-05-31 08:01:12,613 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,613 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,613 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system:
2023-05-31 08:01:12,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc
2023-05-31 08:01:12,614 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,614 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator
2023-05-31 08:01:12,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort
2023-05-31 08:01:12,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired
2023-05-31 08:01:12,615 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,616 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached
2023-05-31 08:01:12,617 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator
2023-05-31 08:01:12,617 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@74cadb47[Count = 0] remaining members to acquire global barrier
2023-05-31 08:01:12,617 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution.
2023-05-31 08:01:12,617 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,625 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,625 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,625 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling
2023-05-31 08:01:12,625 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator.
2023-05-31 08:01:12,625 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed
2023-05-31 08:01:12,625 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase16.apache.org,43783,1685520050785' in zk
2023-05-31 08:01:12,626 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,626 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release'
2023-05-31 08:01:12,634 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion
2023-05-31 08:01:12,634 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785
2023-05-31 08:01:12,634 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer.
2023-05-31 08:01:12,634 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 08:01:12,634 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:12,635 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:12,636 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:12,637 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:12,638 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,639 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,640 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:12,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,641 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,643 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase16.apache.org,43783,1685520050785': 2023-05-31 08:01:12,643 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' released barrier for 
procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 08:01:12,643 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 08:01:12,643 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 08:01:12,644 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 08:01:12,644 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,644 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 08:01:12,674 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,674 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,674 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:12,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:12,674 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,674 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:12,675 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:12,676 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:12,676 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,676 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:12,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,677 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,678 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:12,679 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,680 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,680 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,681 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:12,682 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,683 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,697 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,697 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:12,697 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,697 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:12,697 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 08:01:12,697 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 08:01:12,697 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:12,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:12,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 08:01:12,697 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:12,697 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' to complete. (max 20000 ms per retry) 2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
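The HBaseAdmin lines above poll the master for procedure completion on a fixed budget: an overall 300000 ms deadline with a 10000 ms sleep between status checks. A minimal sketch of that poll-with-deadline loop; the `is_done` callable is a hypothetical stand-in for the "Getting current status of procedure from master..." RPC and is not an HBase API:

```python
import time

def wait_for_procedure(is_done, max_wait_ms=300_000, sleep_ms=10_000):
    """Poll is_done() until it reports completion or the overall budget runs out.

    Mirrors the pattern in the log: one overall deadline plus a fixed sleep
    between status checks. is_done is caller-supplied (illustrative only).
    """
    deadline = time.monotonic() + max_wait_ms / 1000.0
    attempt = 0
    while time.monotonic() < deadline:
        attempt += 1
        if is_done():          # ask the coordinator whether the procedure finished
            return attempt     # number of status checks it took
        time.sleep(sleep_ms / 1000.0)
    raise TimeoutError(f"procedure not done within {max_wait_ms} ms")
```

Keeping one monotonic deadline (rather than counting retries) means a slow status RPC cannot stretch the total wait past the configured maximum.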
2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:12,699 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:12,699 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:12,699 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,699 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-31 08:01:22,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 08:01:22,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(933): Client=jenkins//188.40.62.62 procedure request for: flush-table-proc 2023-05-31 08:01:22,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-05-31 08:01:22,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,715 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:22,715 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:22,716 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 08:01:22,716 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
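From here the coordinator blocks until every member acquires the barrier; the log later shows it waiting on a `java.util.concurrent.CountDownLatch` ("counting down latch. Waiting for 0 more"). A minimal Python analogue of that latch, for illustration only (HBase itself uses the JDK class):

```python
import threading

class CountDownLatch:
    """Minimal analogue of java.util.concurrent.CountDownLatch, which the
    procedure coordinator waits on while members join the acquire barrier."""

    def __init__(self, count):
        self._count = count
        self._cond = threading.Condition()

    def count_down(self):
        # Called once per member that reaches the barrier.
        with self._cond:
            if self._count > 0:
                self._count -= 1
            if self._count == 0:
                self._cond.notify_all()

    def wait_for_zero(self, timeout=None):
        # The coordinator blocks here until all members have counted down.
        with self._cond:
            self._cond.wait_for(lambda: self._count == 0, timeout)
            return self._count == 0
```

In the log the member count is 1 (a single regionserver), so a single `count_down` releases the coordinator and the procedure moves to its 'in-barrier' phase.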
2023-05-31 08:01:22,716 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,716 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,758 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:22,758 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:22,758 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:22,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:22,759 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:22,759 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,759 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 08:01:22,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 08:01:22,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,760 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,760 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 08:01:22,760 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:22,760 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 08:01:22,761 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:22,761 DEBUG [member: 
'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 08:01:22,762 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 08:01:22,762 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 08:01:22,762 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:22,762 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. started... 
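The flush recorded next writes the new store file under a `.tmp` directory and then commits it into the store by rename (`.tmp/info/<file>` committed as `info/<file>`). A minimal sketch of that write-then-commit pattern, with the local filesystem and `os.replace` standing in for HDFS and the path layout purely illustrative:

```python
import os

def commit_flush(data, store_dir, filename):
    """Write a flushed store file under <store_dir>/.tmp, fsync it, then
    atomically rename it into <store_dir> -- the two-step commit shown in
    the log. Local paths stand in for HDFS here; layout is illustrative."""
    tmp_dir = os.path.join(store_dir, ".tmp")
    os.makedirs(tmp_dir, exist_ok=True)     # also creates store_dir itself
    tmp_path = os.path.join(tmp_dir, filename)
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                # make data durable before committing
    final_path = os.path.join(store_dir, filename)
    os.replace(tmp_path, final_path)        # atomic rename is the "commit" step
    return final_path
```

Because the rename is atomic, readers scanning the store directory never observe a half-written file: they see either no file or the complete, committed one.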
2023-05-31 08:01:22,762 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing bcb63872e2a6df39a97b7e0f9611811c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 08:01:22,775 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/b1c017062d3f4f9d8660ff8fcadda050 2023-05-31 08:01:22,784 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/b1c017062d3f4f9d8660ff8fcadda050 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050 2023-05-31 08:01:22,790 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050, entries=1, sequenceid=9, filesize=5.8 K 2023-05-31 08:01:22,790 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for bcb63872e2a6df39a97b7e0f9611811c in 28ms, sequenceid=9, compaction 
requested=false 2023-05-31 08:01:22,790 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:01:22,790 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:22,790 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 08:01:22,791 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 08:01:22,791 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:22,791 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 08:01:22,791 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 08:01:23,021 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,021 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,021 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,021 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:23,021 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:23,022 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,022 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 08:01:23,022 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:23,024 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:23,025 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,026 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,028 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:23,029 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 08:01:23,029 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@31c85bba[Count = 0] remaining members to acquire global barrier 2023-05-31 08:01:23,029 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 08:01:23,029 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,046 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,047 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,047 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,047 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(180): 
Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-31 08:01:23,047 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 08:01:23,047 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase16.apache.org,43783,1685520050785' in zk 2023-05-31 08:01:23,047 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,048 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 08:01:23,063 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 08:01:23,063 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,064 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error 
notifications will be received for this timer. 2023-05-31 08:01:23,064 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,064 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:23,065 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:23,064 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 08:01:23,065 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:23,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:23,066 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:23,068 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,068 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,069 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase16.apache.org,43783,1685520050785': 2023-05-31 08:01:23,069 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 
'jenkins-hbase16.apache.org,43783,1685520050785' released barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 08:01:23,069 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 08:01:23,069 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 08:01:23,069 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 08:01:23,069 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,069 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRolling including nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 08:01:23,080 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,080 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:23,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:23,080 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,080 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:23,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,080 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:23,080 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,080 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:23,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:23,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,081 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:23,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,082 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,082 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:23,083 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,084 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,096 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:23,096 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, 
quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,096 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:23,096 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:23,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:23,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-31 08:01:23,096 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 08:01:23,096 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-31 08:01:23,096 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:23,096 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,097 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-31 08:01:23,098 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 08:01:23,097 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,098 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:23,098 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,098 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,098 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:23,098 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:23,098 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:33,098 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 08:01:33,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 08:01:33,125 INFO [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520051309 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520093110 2023-05-31 08:01:33,126 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK], DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK]] 2023-05-31 08:01:33,126 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520051309 is not closed yet, will try archiving it next time 2023-05-31 08:01:33,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(933): Client=jenkins//188.40.62.62 procedure request for: flush-table-proc 2023-05-31 08:01:33,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-05-31 08:01:33,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,138 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:33,138 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:33,139 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 08:01:33,139 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 08:01:33,140 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,140 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,191 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:33,191 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,191 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:33,191 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:33,192 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,192 DEBUG 
[(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 08:01:33,192 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,192 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,193 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 08:01:33,193 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,193 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,193 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 08:01:33,193 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,194 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 08:01:33,194 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:33,194 
DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 08:01:33,195 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 08:01:33,195 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 08:01:33,195 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:33,195 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. started... 
2023-05-31 08:01:33,195 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing bcb63872e2a6df39a97b7e0f9611811c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 08:01:33,207 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/8b5ecaf49f3c4e268e870df0e7d9cc56 2023-05-31 08:01:33,214 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/8b5ecaf49f3c4e268e870df0e7d9cc56 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56 2023-05-31 08:01:33,220 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56, entries=1, sequenceid=13, filesize=5.8 K 2023-05-31 08:01:33,220 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for bcb63872e2a6df39a97b7e0f9611811c in 25ms, sequenceid=13, compaction 
requested=true 2023-05-31 08:01:33,221 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:01:33,221 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:33,221 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 08:01:33,221 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 08:01:33,221 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,221 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 08:01:33,221 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 08:01:33,229 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 
2023-05-31 08:01:33,229 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:33,229 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:33,229 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,230 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 08:01:33,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:33,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:33,230 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 
08:01:33,231 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:33,231 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 08:01:33,231 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@5a10e6bc[Count = 0] remaining members to acquire global barrier 2023-05-31 08:01:33,231 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 08:01:33,231 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,241 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,241 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,241 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,241 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 
received 'reached' from coordinator. 2023-05-31 08:01:33,241 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,241 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 08:01:33,241 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 08:01:33,241 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase16.apache.org,43783,1685520050785' in zk 2023-05-31 08:01:33,250 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 08:01:33,250 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 08:01:33,250 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-31 08:01:33,250 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,250 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,250 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:33,250 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:33,251 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:33,251 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:33,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,252 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:33,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,253 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase16.apache.org,43783,1685520050785': 2023-05-31 
08:01:33,253 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 08:01:33,253 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 08:01:33,253 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 08:01:33,253 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 08:01:33,253 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,254 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 08:01:33,262 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,262 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,262 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:33,262 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,262 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,263 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,263 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,263 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,264 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:33,264 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,264 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,264 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,264 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:33,265 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,265 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,274 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,274 DEBUG [Listener at 
localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:33,274 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,274 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:33,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 08:01:33,274 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:33,274 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:33,274 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 08:01:33,275 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:33,274 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:33,275 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-05-31 08:01:33,275 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,275 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 08:01:33,275 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,276 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:33,276 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:33,276 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:43,276 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 08:01:43,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 08:01:43,279 DEBUG [Listener at localhost.localdomain/35759] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:01:43,289 DEBUG [Listener at localhost.localdomain/35759] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:01:43,289 DEBUG [Listener at localhost.localdomain/35759] regionserver.HStore(1912): bcb63872e2a6df39a97b7e0f9611811c/info is initiating minor compaction (all files) 2023-05-31 08:01:43,290 INFO [Listener at localhost.localdomain/35759] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 08:01:43,290 INFO [Listener at localhost.localdomain/35759] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:43,291 INFO [Listener at localhost.localdomain/35759] regionserver.HRegion(2259): Starting compaction of bcb63872e2a6df39a97b7e0f9611811c/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 
2023-05-31 08:01:43,291 INFO [Listener at localhost.localdomain/35759] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56] into tmpdir=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp, totalSize=17.4 K 2023-05-31 08:01:43,292 DEBUG [Listener at localhost.localdomain/35759] compactions.Compactor(207): Compacting 70566624dd194d019a222cfafd11ead0, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685520072532 2023-05-31 08:01:43,293 DEBUG [Listener at localhost.localdomain/35759] compactions.Compactor(207): Compacting b1c017062d3f4f9d8660ff8fcadda050, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685520082705 2023-05-31 08:01:43,294 DEBUG [Listener at localhost.localdomain/35759] compactions.Compactor(207): Compacting 8b5ecaf49f3c4e268e870df0e7d9cc56, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685520093103 2023-05-31 08:01:43,310 INFO [Listener at localhost.localdomain/35759] throttle.PressureAwareThroughputController(145): bcb63872e2a6df39a97b7e0f9611811c#info#compaction#19 average 
throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:01:43,324 DEBUG [Listener at localhost.localdomain/35759] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/d1784e951e6d49b2aa1dddc5c18fc534 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/d1784e951e6d49b2aa1dddc5c18fc534 2023-05-31 08:01:43,330 INFO [Listener at localhost.localdomain/35759] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in bcb63872e2a6df39a97b7e0f9611811c/info of bcb63872e2a6df39a97b7e0f9611811c into d1784e951e6d49b2aa1dddc5c18fc534(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:01:43,330 DEBUG [Listener at localhost.localdomain/35759] regionserver.HRegion(2289): Compaction status journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:01:43,342 INFO [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520093110 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520103332 2023-05-31 08:01:43,342 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK], DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK]] 2023-05-31 08:01:43,342 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520093110 is not closed yet, will try archiving it next time 2023-05-31 08:01:43,342 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520051309 to hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520051309 2023-05-31 08:01:43,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(933): Client=jenkins//188.40.62.62 procedure request for: flush-table-proc 2023-05-31 
08:01:43,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-31 08:01:43,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,349 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:43,349 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:43,350 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-31 08:01:43,350 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
2023-05-31 08:01:43,350 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,350 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,400 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,400 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:43,401 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:43,401 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:43,401 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,401 DEBUG 
[(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-31 08:01:43,402 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,402 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-31 08:01:43,404 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,404 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,404 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-31 08:01:43,404 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,405 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-31 08:01:43,405 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-31 08:01:43,406 
DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-31 08:01:43,406 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-31 08:01:43,406 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-31 08:01:43,406 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:43,406 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. started... 
2023-05-31 08:01:43,407 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing bcb63872e2a6df39a97b7e0f9611811c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 08:01:43,470 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/3b14d89f5d55417386b532e4856fc7ef 2023-05-31 08:01:43,477 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/3b14d89f5d55417386b532e4856fc7ef as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/3b14d89f5d55417386b532e4856fc7ef 2023-05-31 08:01:43,483 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/3b14d89f5d55417386b532e4856fc7ef, entries=1, sequenceid=18, filesize=5.8 K 2023-05-31 08:01:43,484 INFO [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for bcb63872e2a6df39a97b7e0f9611811c in 77ms, sequenceid=18, compaction 
requested=false 2023-05-31 08:01:43,484 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:01:43,484 DEBUG [rs(jenkins-hbase16.apache.org,43783,1685520050785)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:43,484 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-31 08:01:43,484 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-31 08:01:43,484 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,484 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-31 08:01:43,484 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-31 08:01:43,499 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,499 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:43,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:43,499 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,499 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-31 08:01:43,499 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:43,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:43,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,500 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,501 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:43,501 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase16.apache.org,43783,1685520050785' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-31 08:01:43,501 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@b3cd26[Count = 0] remaining members to acquire global barrier 2023-05-31 08:01:43,501 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-31 08:01:43,501 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,515 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,516 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,516 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,516 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(180): 
Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-31 08:01:43,516 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-31 08:01:43,516 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase16.apache.org,43783,1685520050785' in zk 2023-05-31 08:01:43,516 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,516 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-31 08:01:43,524 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-31 08:01:43,524 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,524 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error 
notifications will be received for this timer. 2023-05-31 08:01:43,525 DEBUG [member: 'jenkins-hbase16.apache.org,43783,1685520050785' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-31 08:01:43,524 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:43,525 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:43,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:43,526 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:43,527 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,528 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:43,529 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,530 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,531 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase16.apache.org,43783,1685520050785': 2023-05-31 08:01:43,531 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 
'jenkins-hbase16.apache.org,43783,1685520050785' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-31 08:01:43,531 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-31 08:01:43,531 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-31 08:01:43,531 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-31 08:01:43,531 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,531 INFO [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-31 08:01:43,540 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,541 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,541 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,541 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-31 08:01:43,541 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-31 08:01:43,541 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,541 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,541 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:43,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-31 08:01:43,542 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:43,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:43,542 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing 
znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,542 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,543 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-31 08:01:43,544 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,545 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,545 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,546 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-31 08:01:43,547 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,548 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,557 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-31 08:01:43,557 DEBUG [Listener 
at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,557 DEBUG [(jenkins-hbase16.apache.org,46209,1685520050643)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-31 08:01:43,557 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-31 08:01:43,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:43,557 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-31 08:01:43,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-31 08:01:43,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-31 08:01:43,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
2023-05-31 08:01:43,557 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,558 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-31 08:01:43,558 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. 
(max 20000 ms per retry) 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-31 08:01:43,558 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:43,559 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-31 08:01:53,559 DEBUG [Listener at localhost.localdomain/35759] client.HBaseAdmin(2704): Getting current status of procedure from master... 
2023-05-31 08:01:53,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46209] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-31 08:01:53,577 INFO [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520103332 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520113567 2023-05-31 08:01:53,577 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34371,DS-576b9927-00d2-4899-ae5a-953681c947cc,DISK], DatanodeInfoWithStorage[127.0.0.1:43177,DS-cf5d2339-8e04-453c-977d-41bb762ec940,DISK]] 2023-05-31 08:01:53,577 DEBUG [Listener at localhost.localdomain/35759] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520103332 is not closed yet, will try archiving it next time 2023-05-31 08:01:53,577 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 08:01:53,577 INFO [Listener at localhost.localdomain/35759] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-31 08:01:53,577 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520093110 to 
hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs/jenkins-hbase16.apache.org%2C43783%2C1685520050785.1685520093110 2023-05-31 08:01:53,577 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5568e635 to 127.0.0.1:61345 2023-05-31 08:01:53,578 DEBUG [Listener at localhost.localdomain/35759] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:01:53,580 DEBUG [Listener at localhost.localdomain/35759] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 08:01:53,580 DEBUG [Listener at localhost.localdomain/35759] util.JVMClusterUtil(257): Found active master hash=753950071, stopped=false 2023-05-31 08:01:53,580 INFO [Listener at localhost.localdomain/35759] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:01:53,616 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 08:01:53,616 INFO [Listener at localhost.localdomain/35759] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 08:01:53,616 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:53,617 DEBUG [Listener at localhost.localdomain/35759] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6d7bf884 to 127.0.0.1:61345 2023-05-31 08:01:53,616 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 
2023-05-31 08:01:53,618 DEBUG [Listener at localhost.localdomain/35759] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:01:53,618 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:01:53,618 INFO [Listener at localhost.localdomain/35759] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,43783,1685520050785' ***** 2023-05-31 08:01:53,619 INFO [Listener at localhost.localdomain/35759] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 08:01:53,619 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:01:53,619 INFO [RS:0;jenkins-hbase16:43783] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 08:01:53,619 INFO [RS:0;jenkins-hbase16:43783] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 08:01:53,619 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 08:01:53,619 INFO [RS:0;jenkins-hbase16:43783] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 08:01:53,621 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(3303): Received CLOSE for bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(3303): Received CLOSE for 7ce20a5666592d82a2d138e63056f606 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:53,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing bcb63872e2a6df39a97b7e0f9611811c, disabling compactions & flushes 2023-05-31 08:01:53,622 DEBUG [RS:0;jenkins-hbase16:43783] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d0cb38b to 127.0.0.1:61345 2023-05-31 08:01:53,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:53,622 DEBUG [RS:0;jenkins-hbase16:43783] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:01:53,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-31 08:01:53,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. after waiting 0 ms 2023-05-31 08:01:53,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:53,622 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 08:01:53,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing bcb63872e2a6df39a97b7e0f9611811c 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-31 08:01:53,623 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-31 08:01:53,623 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, bcb63872e2a6df39a97b7e0f9611811c=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c., 7ce20a5666592d82a2d138e63056f606=hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606.} 2023-05-31 08:01:53,624 DEBUG [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1504): Waiting on 1588230740, 7ce20a5666592d82a2d138e63056f606, bcb63872e2a6df39a97b7e0f9611811c 2023-05-31 08:01:53,624 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:01:53,624 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:01:53,624 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 
2023-05-31 08:01:53,624 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:01:53,624 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 08:01:53,624 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-31 08:01:53,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/8acda803eaac420e849424454fd90028 2023-05-31 08:01:53,638 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/.tmp/info/ebf509ab6d484de79260e6b51c089735 2023-05-31 08:01:53,643 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/.tmp/info/8acda803eaac420e849424454fd90028 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8acda803eaac420e849424454fd90028 2023-05-31 08:01:53,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8acda803eaac420e849424454fd90028, entries=1, sequenceid=22, filesize=5.8 K 2023-05-31 08:01:53,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for bcb63872e2a6df39a97b7e0f9611811c in 27ms, sequenceid=22, compaction requested=true 2023-05-31 08:01:53,654 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56] to archive 2023-05-31 08:01:53,655 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-31 08:01:53,658 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0 to hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/70566624dd194d019a222cfafd11ead0 2023-05-31 08:01:53,659 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/.tmp/table/22da996cb51f4e4197f7cf3e7b054b29 2023-05-31 08:01:53,660 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050 to hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/b1c017062d3f4f9d8660ff8fcadda050 2023-05-31 08:01:53,661 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56 to hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/info/8b5ecaf49f3c4e268e870df0e7d9cc56 2023-05-31 08:01:53,669 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/bcb63872e2a6df39a97b7e0f9611811c/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-31 08:01:53,669 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/.tmp/info/ebf509ab6d484de79260e6b51c089735 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/info/ebf509ab6d484de79260e6b51c089735 2023-05-31 08:01:53,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for bcb63872e2a6df39a97b7e0f9611811c: 2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685520052358.bcb63872e2a6df39a97b7e0f9611811c. 
2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 7ce20a5666592d82a2d138e63056f606, disabling compactions & flushes 2023-05-31 08:01:53,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. after waiting 0 ms 2023-05-31 08:01:53,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:53,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/namespace/7ce20a5666592d82a2d138e63056f606/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 08:01:53,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 2023-05-31 08:01:53,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 7ce20a5666592d82a2d138e63056f606: 2023-05-31 08:01:53,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685520051587.7ce20a5666592d82a2d138e63056f606. 
2023-05-31 08:01:53,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/info/ebf509ab6d484de79260e6b51c089735, entries=20, sequenceid=14, filesize=7.6 K 2023-05-31 08:01:53,677 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/.tmp/table/22da996cb51f4e4197f7cf3e7b054b29 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/table/22da996cb51f4e4197f7cf3e7b054b29 2023-05-31 08:01:53,682 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/table/22da996cb51f4e4197f7cf3e7b054b29, entries=4, sequenceid=14, filesize=4.9 K 2023-05-31 08:01:53,683 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 59ms, sequenceid=14, compaction requested=false 2023-05-31 08:01:53,689 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-31 08:01:53,690 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 08:01:53,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:01:53,690 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:01:53,690 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 08:01:53,824 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,43783,1685520050785; all regions closed. 2023-05-31 08:01:53,825 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:53,836 DEBUG [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs 2023-05-31 08:01:53,836 INFO [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C43783%2C1685520050785.meta:.meta(num 1685520051475) 2023-05-31 08:01:53,837 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/WALs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:53,845 DEBUG [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/oldWALs 2023-05-31 08:01:53,845 INFO [RS:0;jenkins-hbase16:43783] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C43783%2C1685520050785:(num 1685520113567) 2023-05-31 08:01:53,845 DEBUG [RS:0;jenkins-hbase16:43783] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:01:53,845 INFO [RS:0;jenkins-hbase16:43783] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:01:53,845 INFO [RS:0;jenkins-hbase16:43783] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore 
name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 08:01:53,845 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:01:53,846 INFO [RS:0;jenkins-hbase16:43783] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:43783 2023-05-31 08:01:53,857 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,43783,1685520050785 2023-05-31 08:01:53,857 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:01:53,857 ERROR [Listener at localhost.localdomain/35759-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@370f1ff9 rejected from java.util.concurrent.ThreadPoolExecutor@33309dc0[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 34] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-05-31 08:01:53,857 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:01:53,865 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,43783,1685520050785] 2023-05-31 08:01:53,865 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,43783,1685520050785; numProcessing=1 2023-05-31 08:01:53,873 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,43783,1685520050785 already deleted, retry=false 2023-05-31 08:01:53,873 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,43783,1685520050785 expired; onlineServers=0 2023-05-31 08:01:53,873 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,46209,1685520050643' ***** 2023-05-31 08:01:53,873 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 08:01:53,874 DEBUG [M:0;jenkins-hbase16:46209] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28cfdc31, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:01:53,874 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:01:53,874 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegionServer(1170): stopping server 
jenkins-hbase16.apache.org,46209,1685520050643; all regions closed. 2023-05-31 08:01:53,875 DEBUG [M:0;jenkins-hbase16:46209] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:01:53,875 DEBUG [M:0;jenkins-hbase16:46209] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 08:01:53,875 DEBUG [M:0;jenkins-hbase16:46209] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 08:01:53,875 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520051084] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520051084,5,FailOnTimeoutGroup] 2023-05-31 08:01:53,875 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520051084] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520051084,5,FailOnTimeoutGroup] 2023-05-31 08:01:53,875 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-31 08:01:53,876 INFO [M:0;jenkins-hbase16:46209] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 08:01:53,878 INFO [M:0;jenkins-hbase16:46209] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-31 08:01:53,879 INFO [M:0;jenkins-hbase16:46209] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-05-31 08:01:53,879 DEBUG [M:0;jenkins-hbase16:46209] master.HMaster(1512): Stopping service threads 2023-05-31 08:01:53,879 INFO [M:0;jenkins-hbase16:46209] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 08:01:53,879 ERROR [M:0;jenkins-hbase16:46209] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 08:01:53,880 INFO [M:0;jenkins-hbase16:46209] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 08:01:53,880 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-31 08:01:53,887 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 08:01:53,887 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:53,887 DEBUG [M:0;jenkins-hbase16:46209] zookeeper.ZKUtil(398): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 08:01:53,887 WARN [M:0;jenkins-hbase16:46209] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 08:01:53,887 INFO [M:0;jenkins-hbase16:46209] assignment.AssignmentManager(315): Stopping assignment 
manager 2023-05-31 08:01:53,888 INFO [M:0;jenkins-hbase16:46209] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 08:01:53,888 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:01:53,889 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:01:53,889 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:53,889 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:53,889 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:01:53,889 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:01:53,889 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.93 KB heapSize=47.38 KB 2023-05-31 08:01:53,903 INFO [M:0;jenkins-hbase16:46209] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.93 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/94d99cb8c9c945fcbad961abb28f3be2 2023-05-31 08:01:53,908 INFO [M:0;jenkins-hbase16:46209] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 94d99cb8c9c945fcbad961abb28f3be2 2023-05-31 08:01:53,909 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/94d99cb8c9c945fcbad961abb28f3be2 as hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/94d99cb8c9c945fcbad961abb28f3be2 2023-05-31 08:01:53,914 INFO [M:0;jenkins-hbase16:46209] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 94d99cb8c9c945fcbad961abb28f3be2 2023-05-31 08:01:53,914 INFO [M:0;jenkins-hbase16:46209] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43541/user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/94d99cb8c9c945fcbad961abb28f3be2, entries=11, sequenceid=100, filesize=6.1 K 2023-05-31 08:01:53,915 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegion(2948): Finished flush of dataSize ~38.93 KB/39866, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=100, 
compaction requested=false 2023-05-31 08:01:53,916 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:53,916 DEBUG [M:0;jenkins-hbase16:46209] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:01:53,916 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/14cabc37-0bee-3875-59a1-e7f68b725d9f/MasterData/WALs/jenkins-hbase16.apache.org,46209,1685520050643 2023-05-31 08:01:53,919 INFO [M:0;jenkins-hbase16:46209] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 08:01:53,919 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:01:53,920 INFO [M:0;jenkins-hbase16:46209] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:46209 2023-05-31 08:01:53,928 DEBUG [M:0;jenkins-hbase16:46209] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,46209,1685520050643 already deleted, retry=false 2023-05-31 08:01:53,965 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:01:53,966 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): regionserver:43783-0x100804128030001, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:01:53,965 INFO [RS:0;jenkins-hbase16:43783] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,43783,1685520050785; zookeeper connection closed. 
2023-05-31 08:01:53,966 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@589a7ac0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@589a7ac0 2023-05-31 08:01:53,966 INFO [Listener at localhost.localdomain/35759] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 08:01:54,065 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:01:54,066 DEBUG [Listener at localhost.localdomain/35759-EventThread] zookeeper.ZKWatcher(600): master:46209-0x100804128030000, quorum=127.0.0.1:61345, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:01:54,065 INFO [M:0;jenkins-hbase16:46209] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,46209,1685520050643; zookeeper connection closed. 
2023-05-31 08:01:54,066 WARN [Listener at localhost.localdomain/35759] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:01:54,071 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 08:01:54,178 WARN [BP-1548147713-188.40.62.62-1685520049214 heartbeating to localhost.localdomain/127.0.0.1:43541] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:01:54,178 WARN [BP-1548147713-188.40.62.62-1685520049214 heartbeating to localhost.localdomain/127.0.0.1:43541] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1548147713-188.40.62.62-1685520049214 (Datanode Uuid d58fae8a-dde7-4daa-b641-1f8f10f9ec39) service to localhost.localdomain/127.0.0.1:43541 2023-05-31 08:01:54,180 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/dfs/data/data3/current/BP-1548147713-188.40.62.62-1685520049214] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:01:54,181 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/dfs/data/data4/current/BP-1548147713-188.40.62.62-1685520049214] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:01:54,183 WARN [Listener at localhost.localdomain/35759] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:01:54,186 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 08:01:54,292 WARN 
[BP-1548147713-188.40.62.62-1685520049214 heartbeating to localhost.localdomain/127.0.0.1:43541] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:01:54,292 WARN [BP-1548147713-188.40.62.62-1685520049214 heartbeating to localhost.localdomain/127.0.0.1:43541] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1548147713-188.40.62.62-1685520049214 (Datanode Uuid bb9e7741-0d3f-4f58-abd9-fd55cbc7042f) service to localhost.localdomain/127.0.0.1:43541 2023-05-31 08:01:54,293 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/dfs/data/data1/current/BP-1548147713-188.40.62.62-1685520049214] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:01:54,294 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/cluster_40175e60-aa1e-db7a-ffa0-38e5c03ff726/dfs/data/data2/current/BP-1548147713-188.40.62.62-1685520049214] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:01:54,311 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 08:01:54,428 INFO [Listener at localhost.localdomain/35759] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 08:01:54,451 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 08:01:54,460 INFO [Listener at localhost.localdomain/35759] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=96 (was 88) - Thread LEAK? 
-, OpenFileDescriptor=502 (was 461) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=40 (was 87), ProcessCount=166 (was 165) - ProcessCount LEAK? -, AvailableMemoryMB=7578 (was 7709) 2023-05-31 08:01:54,468 INFO [Listener at localhost.localdomain/35759] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=97, OpenFileDescriptor=502, MaxFileDescriptor=60000, SystemLoadAverage=40, ProcessCount=166, AvailableMemoryMB=7578 2023-05-31 08:01:54,468 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/hadoop.log.dir so I do NOT create it in target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/fee27c59-1c59-c222-3150-26d2cf2e01c0/hadoop.tmp.dir so I do NOT create it in target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71, deleteOnExit=true 2023-05-31 08:01:54,469 INFO [Listener at 
localhost.localdomain/35759] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/test.cache.data in system properties and HBase conf 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/hadoop.log.dir in system properties and HBase conf 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 08:01:54,469 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 08:01:54,470 DEBUG [Listener at localhost.localdomain/35759] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 08:01:54,470 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/nfs.dump.dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/java.io.tmpdir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] 
hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 08:01:54,471 INFO [Listener at localhost.localdomain/35759] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 08:01:54,473 WARN [Listener at localhost.localdomain/35759] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 08:01:54,474 WARN [Listener at localhost.localdomain/35759] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 08:01:54,474 WARN [Listener at localhost.localdomain/35759] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 08:01:54,669 WARN [Listener at localhost.localdomain/35759] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 08:01:54,672 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:01:54,678 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/java.io.tmpdir/Jetty_localhost_localdomain_36477_hdfs____.2xc7rp/webapp 2023-05-31 08:01:54,748 INFO [Listener at localhost.localdomain/35759] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36477 2023-05-31 08:01:54,750 WARN [Listener at localhost.localdomain/35759] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 08:01:54,751 WARN [Listener at localhost.localdomain/35759] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 08:01:54,751 WARN [Listener at localhost.localdomain/35759] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 08:01:54,885 WARN [Listener at localhost.localdomain/36683] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:01:54,895 WARN [Listener at localhost.localdomain/36683] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 08:01:54,897 WARN [Listener at localhost.localdomain/36683] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 08:01:54,897 INFO [Listener at localhost.localdomain/36683] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:01:54,902 INFO [Listener at localhost.localdomain/36683] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/java.io.tmpdir/Jetty_localhost_33669_datanode____l6mmbk/webapp 2023-05-31 08:01:54,974 INFO [Listener at localhost.localdomain/36683] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33669 2023-05-31 08:01:54,978 WARN [Listener at localhost.localdomain/40917] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:01:54,988 WARN [Listener at localhost.localdomain/40917] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 08:01:54,990 WARN [Listener at localhost.localdomain/40917] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 08:01:54,992 INFO [Listener at localhost.localdomain/40917] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:01:54,996 INFO [Listener at localhost.localdomain/40917] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/java.io.tmpdir/Jetty_localhost_41363_datanode____erpg56/webapp 2023-05-31 08:01:55,070 INFO [Listener at localhost.localdomain/40917] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41363 2023-05-31 08:01:55,080 WARN [Listener at localhost.localdomain/39789] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:01:55,188 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:01:55,637 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbf2d3a9ed0234da2: Processing first storage report for DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505 from datanode 0ff8cb49-8536-4992-ac43-af2097cb89b6 2023-05-31 08:01:55,637 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbf2d3a9ed0234da2: from storage DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505 node DatanodeRegistration(127.0.0.1:45941, datanodeUuid=0ff8cb49-8536-4992-ac43-af2097cb89b6, infoPort=33833, infoSecurePort=0, ipcPort=40917, storageInfo=lv=-57;cid=testClusterID;nsid=1561457061;c=1685520114475), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:01:55,637 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbf2d3a9ed0234da2: Processing first storage report for 
DS-47d55185-25b2-44ef-8263-70964d880a38 from datanode 0ff8cb49-8536-4992-ac43-af2097cb89b6 2023-05-31 08:01:55,637 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbf2d3a9ed0234da2: from storage DS-47d55185-25b2-44ef-8263-70964d880a38 node DatanodeRegistration(127.0.0.1:45941, datanodeUuid=0ff8cb49-8536-4992-ac43-af2097cb89b6, infoPort=33833, infoSecurePort=0, ipcPort=40917, storageInfo=lv=-57;cid=testClusterID;nsid=1561457061;c=1685520114475), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:01:55,702 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87e9845d41f519bb: Processing first storage report for DS-2411467d-81bb-4690-9da8-12d424e3eea8 from datanode 553bed65-4b39-43b7-8e56-51d6a4569ec0 2023-05-31 08:01:55,702 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87e9845d41f519bb: from storage DS-2411467d-81bb-4690-9da8-12d424e3eea8 node DatanodeRegistration(127.0.0.1:45257, datanodeUuid=553bed65-4b39-43b7-8e56-51d6a4569ec0, infoPort=34737, infoSecurePort=0, ipcPort=39789, storageInfo=lv=-57;cid=testClusterID;nsid=1561457061;c=1685520114475), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:01:55,702 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87e9845d41f519bb: Processing first storage report for DS-5cdf1ed9-6672-4c3e-a01b-0a6e304404f2 from datanode 553bed65-4b39-43b7-8e56-51d6a4569ec0 2023-05-31 08:01:55,702 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87e9845d41f519bb: from storage DS-5cdf1ed9-6672-4c3e-a01b-0a6e304404f2 node DatanodeRegistration(127.0.0.1:45257, datanodeUuid=553bed65-4b39-43b7-8e56-51d6a4569ec0, infoPort=34737, infoSecurePort=0, ipcPort=39789, storageInfo=lv=-57;cid=testClusterID;nsid=1561457061;c=1685520114475), blocks: 0, hasStaleStorage: false, 
processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:01:55,794 DEBUG [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290 2023-05-31 08:01:55,799 INFO [Listener at localhost.localdomain/39789] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/zookeeper_0, clientPort=61400, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 08:01:55,801 INFO [Listener at localhost.localdomain/39789] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61400 2023-05-31 08:01:55,801 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,802 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,820 INFO [Listener at localhost.localdomain/39789] util.FSUtils(471): Created version file 
at hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507 with version=8 2023-05-31 08:01:55,820 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging 2023-05-31 08:01:55,823 INFO [Listener at localhost.localdomain/39789] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:01:55,823 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,823 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,823 INFO [Listener at localhost.localdomain/39789] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:01:55,824 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,824 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:01:55,824 INFO [Listener at localhost.localdomain/39789] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 08:01:55,826 INFO [Listener at localhost.localdomain/39789] ipc.NettyRpcServer(120): Bind to /188.40.62.62:42479 2023-05-31 08:01:55,826 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,828 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,829 INFO [Listener at localhost.localdomain/39789] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42479 connecting to ZooKeeper ensemble=127.0.0.1:61400 2023-05-31 08:01:55,868 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:424790x0, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:01:55,870 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42479-0x1008042269b0000 connected 2023-05-31 08:01:55,941 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:01:55,943 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:01:55,944 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:01:55,946 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42479 2023-05-31 08:01:55,946 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42479 2023-05-31 08:01:55,947 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42479 2023-05-31 08:01:55,948 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42479 2023-05-31 08:01:55,949 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42479 2023-05-31 08:01:55,950 INFO [Listener at localhost.localdomain/39789] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507, hbase.cluster.distributed=false 2023-05-31 08:01:55,967 INFO [Listener at localhost.localdomain/39789] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:01:55,968 INFO [Listener at localhost.localdomain/39789] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 08:01:55,969 INFO [Listener at localhost.localdomain/39789] ipc.NettyRpcServer(120): Bind to /188.40.62.62:42933 2023-05-31 08:01:55,970 INFO [Listener at localhost.localdomain/39789] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 08:01:55,971 DEBUG [Listener at localhost.localdomain/39789] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 08:01:55,971 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,972 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:55,972 INFO [Listener at localhost.localdomain/39789] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42933 connecting to ZooKeeper ensemble=127.0.0.1:61400 2023-05-31 08:01:55,982 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:429330x0, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:01:55,983 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42933-0x1008042269b0001 connected 2023-05-31 08:01:55,983 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:01:55,984 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:01:55,985 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ZKUtil(164): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:01:55,985 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42933 2023-05-31 08:01:55,985 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42933 2023-05-31 08:01:55,986 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42933 2023-05-31 08:01:55,986 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42933 2023-05-31 08:01:55,986 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42933 2023-05-31 08:01:55,988 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:55,998 DEBUG [Listener at 
localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:01:55,999 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,006 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:01:56,006 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:01:56,007 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,008 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:01:56,009 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,42479,1685520115822 from backup master directory 2023-05-31 08:01:56,010 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:01:56,020 DEBUG [Listener 
at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,020 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:01:56,021 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,020 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:01:56,039 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/hbase.id with ID: da7b93c7-0949-4682-a89f-a7f4255ced33 2023-05-31 08:01:56,049 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:56,056 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,064 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3eee5691 to 127.0.0.1:61400 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 
08:01:56,074 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b29f177, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:01:56,074 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 08:01:56,074 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 08:01:56,075 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:01:56,076 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store-tmp 2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:01:56,083 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:01:56,083 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:01:56,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:01:56,084 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/WALs/jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,086 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C42479%2C1685520115822, suffix=, logDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/WALs/jenkins-hbase16.apache.org,42479,1685520115822, archiveDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/oldWALs, maxLogs=10 2023-05-31 08:01:56,097 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/WALs/jenkins-hbase16.apache.org,42479,1685520115822/jenkins-hbase16.apache.org%2C42479%2C1685520115822.1685520116086 2023-05-31 08:01:56,097 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45941,DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505,DISK], DatanodeInfoWithStorage[127.0.0.1:45257,DS-2411467d-81bb-4690-9da8-12d424e3eea8,DISK]] 2023-05-31 08:01:56,097 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:01:56,098 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:56,098 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,098 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,100 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,101 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 08:01:56,102 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 08:01:56,102 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,103 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,103 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,105 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:01:56,110 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:01:56,110 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=712427, jitterRate=-0.09410269558429718}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:01:56,110 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:01:56,111 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 08:01:56,112 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 08:01:56,113 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 08:01:56,114 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 08:01:56,123 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 08:01:56,123 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 08:01:56,123 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 08:01:56,123 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 08:01:56,124 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 08:01:56,131 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,132 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 08:01:56,132 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 08:01:56,133 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 08:01:56,140 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:01:56,140 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:01:56,140 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,140 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,42479,1685520115822, sessionid=0x1008042269b0000, setting cluster-up flag (Was=false) 2023-05-31 08:01:56,156 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,181 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 08:01:56,182 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,198 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,231 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 08:01:56,233 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:56,234 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.hbase-snapshot/.tmp 2023-05-31 08:01:56,236 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:01:56,237 DEBUG 
[master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:01:56,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685520146239 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 08:01:56,239 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 08:01:56,240 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,240 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:01:56,240 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 08:01:56,240 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 08:01:56,240 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 08:01:56,240 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 08:01:56,240 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 08:01:56,241 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 08:01:56,241 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520116241,5,FailOnTimeoutGroup] 2023-05-31 08:01:56,241 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520116241,5,FailOnTimeoutGroup] 2023-05-31 08:01:56,241 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,241 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 08:01:56,241 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,241 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 08:01:56,242 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:01:56,250 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:01:56,251 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:01:56,251 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507 2023-05-31 08:01:56,258 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:56,259 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:01:56,261 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info 2023-05-31 08:01:56,261 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:01:56,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:01:56,263 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:01:56,264 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 
08:01:56,264 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,264 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:01:56,265 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/table 2023-05-31 08:01:56,266 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:01:56,266 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,267 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740 2023-05-31 08:01:56,268 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740 2023-05-31 08:01:56,270 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 08:01:56,271 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:01:56,274 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:01:56,274 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=875609, jitterRate=0.1133945882320404}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:01:56,275 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for 
region hbase:meta,,1.1588230740 2023-05-31 08:01:56,275 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:01:56,275 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:01:56,277 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:01:56,277 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 08:01:56,277 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 08:01:56,278 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 08:01:56,280 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 08:01:56,289 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(951): ClusterId : da7b93c7-0949-4682-a89f-a7f4255ced33 2023-05-31 08:01:56,290 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 08:01:56,300 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 08:01:56,300 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 08:01:56,308 DEBUG [RS:0;jenkins-hbase16:42933] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 08:01:56,310 DEBUG [RS:0;jenkins-hbase16:42933] zookeeper.ReadOnlyZKClient(139): Connect 0x4d13788e to 127.0.0.1:61400 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:01:56,325 DEBUG [RS:0;jenkins-hbase16:42933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43d49f5c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:01:56,325 DEBUG [RS:0;jenkins-hbase16:42933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47eb13fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:01:56,338 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:42933 2023-05-31 08:01:56,338 INFO [RS:0;jenkins-hbase16:42933] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 08:01:56,338 INFO [RS:0;jenkins-hbase16:42933] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 08:01:56,338 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 08:01:56,339 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,42479,1685520115822 with isa=jenkins-hbase16.apache.org/188.40.62.62:42933, startcode=1685520115967 2023-05-31 08:01:56,339 DEBUG [RS:0;jenkins-hbase16:42933] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 08:01:56,341 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:60381, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 08:01:56,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,342 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507 2023-05-31 08:01:56,342 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36683 2023-05-31 08:01:56,342 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 08:01:56,353 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:01:56,353 DEBUG [RS:0;jenkins-hbase16:42933] zookeeper.ZKUtil(162): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,353 WARN [RS:0;jenkins-hbase16:42933] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:01:56,353 INFO [RS:0;jenkins-hbase16:42933] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:01:56,354 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,354 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,42933,1685520115967] 2023-05-31 08:01:56,359 DEBUG [RS:0;jenkins-hbase16:42933] zookeeper.ZKUtil(162): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,360 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 08:01:56,360 INFO [RS:0;jenkins-hbase16:42933] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 08:01:56,361 INFO [RS:0;jenkins-hbase16:42933] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 08:01:56,362 INFO [RS:0;jenkins-hbase16:42933] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 08:01:56,362 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 08:01:56,362 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 08:01:56,363 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,364 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,364 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,364 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 DEBUG [RS:0;jenkins-hbase16:42933] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:01:56,365 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,366 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,366 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,378 INFO [RS:0;jenkins-hbase16:42933] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 08:01:56,378 INFO [RS:0;jenkins-hbase16:42933] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42933,1685520115967-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 08:01:56,387 INFO [RS:0;jenkins-hbase16:42933] regionserver.Replication(203): jenkins-hbase16.apache.org,42933,1685520115967 started 2023-05-31 08:01:56,388 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,42933,1685520115967, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:42933, sessionid=0x1008042269b0001 2023-05-31 08:01:56,388 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 08:01:56,388 DEBUG [RS:0;jenkins-hbase16:42933] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,388 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,42933,1685520115967' 2023-05-31 08:01:56,388 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:01:56,388 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,42933,1685520115967' 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 08:01:56,389 DEBUG [RS:0;jenkins-hbase16:42933] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 08:01:56,389 INFO [RS:0;jenkins-hbase16:42933] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 08:01:56,389 INFO [RS:0;jenkins-hbase16:42933] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 08:01:56,430 DEBUG [jenkins-hbase16:42479] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 08:01:56,431 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,42933,1685520115967, state=OPENING 2023-05-31 08:01:56,440 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 08:01:56,448 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:56,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,42933,1685520115967}] 2023-05-31 08:01:56,449 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:01:56,491 INFO [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C42933%2C1685520115967, suffix=, 
logDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967, archiveDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs, maxLogs=32 2023-05-31 08:01:56,501 INFO [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520116492 2023-05-31 08:01:56,501 DEBUG [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45941,DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505,DISK], DatanodeInfoWithStorage[127.0.0.1:45257,DS-2411467d-81bb-4690-9da8-12d424e3eea8,DISK]] 2023-05-31 08:01:56,606 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,606 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 08:01:56,610 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 08:01:56,614 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 08:01:56,615 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:01:56,618 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C42933%2C1685520115967.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967, archiveDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs, maxLogs=32 2023-05-31 08:01:56,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.meta.1685520116618.meta 2023-05-31 08:01:56,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45257,DS-2411467d-81bb-4690-9da8-12d424e3eea8,DISK], DatanodeInfoWithStorage[127.0.0.1:45941,DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505,DISK]] 2023-05-31 08:01:56,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:01:56,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 08:01:56,627 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 08:01:56,627 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 08:01:56,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 08:01:56,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:56,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 08:01:56,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 08:01:56,629 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:01:56,631 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info 2023-05-31 08:01:56,631 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info 2023-05-31 08:01:56,631 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:01:56,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:01:56,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:01:56,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:01:56,634 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 08:01:56,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:01:56,637 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/table 2023-05-31 08:01:56,637 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/table 2023-05-31 08:01:56,638 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:01:56,638 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:56,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740 2023-05-31 08:01:56,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740 2023-05-31 08:01:56,642 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 08:01:56,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:01:56,644 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=831995, jitterRate=0.05793669819831848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:01:56,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:01:56,645 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685520116606 2023-05-31 08:01:56,648 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 08:01:56,649 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 08:01:56,649 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,42933,1685520115967, state=OPEN 2023-05-31 08:01:56,656 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 08:01:56,656 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:01:56,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 08:01:56,658 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,42933,1685520115967 in 207 msec 2023-05-31 08:01:56,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 08:01:56,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 381 msec 2023-05-31 08:01:56,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 426 msec 2023-05-31 08:01:56,662 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685520116661, completionTime=-1 2023-05-31 08:01:56,662 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 08:01:56,662 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 08:01:56,664 DEBUG [hconnection-0x38132172-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:01:56,665 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:01:56,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 08:01:56,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685520176667 2023-05-31 08:01:56,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685520236667 2023-05-31 08:01:56,667 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42479,1685520115822-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42479,1685520115822-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42479,1685520115822-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:42479, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 08:01:56,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:01:56,690 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 08:01:56,691 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 08:01:56,693 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 08:01:56,695 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 08:01:56,697 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:56,698 DEBUG [HFileArchiver-9] 
backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424 empty. 2023-05-31 08:01:56,698 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:56,699 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 08:01:56,712 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 08:01:56,713 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8f30553b0d7dd52eeef80e45f1556424, NAME => 'hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp 2023-05-31 08:01:56,721 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:56,721 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8f30553b0d7dd52eeef80e45f1556424, disabling compactions & flushes 2023-05-31 08:01:56,721 INFO 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:56,721 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:56,722 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. after waiting 0 ms 2023-05-31 08:01:56,722 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:56,722 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:56,722 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8f30553b0d7dd52eeef80e45f1556424: 2023-05-31 08:01:56,724 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 08:01:56,724 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520116724"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520116724"}]},"ts":"1685520116724"} 2023-05-31 08:01:56,726 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 08:01:56,727 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 08:01:56,727 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520116727"}]},"ts":"1685520116727"} 2023-05-31 08:01:56,728 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 08:01:56,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f30553b0d7dd52eeef80e45f1556424, ASSIGN}] 2023-05-31 08:01:56,769 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f30553b0d7dd52eeef80e45f1556424, ASSIGN 2023-05-31 08:01:56,771 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f30553b0d7dd52eeef80e45f1556424, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,42933,1685520115967; forceNewPlan=false, retain=false 2023-05-31 08:01:56,923 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8f30553b0d7dd52eeef80e45f1556424, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:56,923 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520116922"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520116922"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520116922"}]},"ts":"1685520116922"} 2023-05-31 08:01:56,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 8f30553b0d7dd52eeef80e45f1556424, server=jenkins-hbase16.apache.org,42933,1685520115967}] 2023-05-31 08:01:57,088 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:57,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f30553b0d7dd52eeef80e45f1556424, NAME => 'hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:01:57,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:57,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,093 INFO 
[StoreOpener-8f30553b0d7dd52eeef80e45f1556424-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,095 DEBUG [StoreOpener-8f30553b0d7dd52eeef80e45f1556424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/info 2023-05-31 08:01:57,095 DEBUG [StoreOpener-8f30553b0d7dd52eeef80e45f1556424-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/info 2023-05-31 08:01:57,095 INFO [StoreOpener-8f30553b0d7dd52eeef80e45f1556424-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f30553b0d7dd52eeef80e45f1556424 columnFamilyName info 2023-05-31 08:01:57,096 INFO [StoreOpener-8f30553b0d7dd52eeef80e45f1556424-1] regionserver.HStore(310): Store=8f30553b0d7dd52eeef80e45f1556424/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 08:01:57,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 8f30553b0d7dd52eeef80e45f1556424 2023-05-31 08:01:57,107 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:01:57,108 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 8f30553b0d7dd52eeef80e45f1556424; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=749453, jitterRate=-0.04702146351337433}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:01:57,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 8f30553b0d7dd52eeef80e45f1556424: 2023-05-31 08:01:57,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424., pid=6, masterSystemTime=1685520117079 2023-05-31 08:01:57,115 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:57,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. 2023-05-31 08:01:57,116 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8f30553b0d7dd52eeef80e45f1556424, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:57,117 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520117116"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520117116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520117116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520117116"}]},"ts":"1685520117116"} 2023-05-31 08:01:57,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 08:01:57,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 8f30553b0d7dd52eeef80e45f1556424, server=jenkins-hbase16.apache.org,42933,1685520115967 in 192 msec 2023-05-31 08:01:57,122 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 08:01:57,122 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f30553b0d7dd52eeef80e45f1556424, ASSIGN in 355 msec 2023-05-31 08:01:57,123 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 08:01:57,123 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520117123"}]},"ts":"1685520117123"} 2023-05-31 08:01:57,125 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 08:01:57,132 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 08:01:57,134 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 443 msec 2023-05-31 08:01:57,193 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 08:01:57,203 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:01:57,204 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:57,212 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 08:01:57,228 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, 
quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:01:57,239 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 27 msec 2023-05-31 08:01:57,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 08:01:57,256 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:01:57,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 23 msec 2023-05-31 08:01:57,295 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 08:01:57,311 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.290sec 2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42479,1685520115822-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 08:01:57,312 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,42479,1685520115822-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 08:01:57,316 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 08:01:57,392 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ReadOnlyZKClient(139): Connect 0x043f18b4 to 127.0.0.1:61400 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:01:57,408 DEBUG [Listener at localhost.localdomain/39789] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c7c3dee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:01:57,411 DEBUG [hconnection-0x54f14bdb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:01:57,415 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:01:57,417 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:01:57,417 INFO [Listener at localhost.localdomain/39789] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:01:57,436 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 08:01:57,436 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:01:57,437 INFO [Listener at localhost.localdomain/39789] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 08:01:57,439 DEBUG [Listener at localhost.localdomain/39789] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-31 08:01:57,441 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:47600, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-31 08:01:57,443 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-31 08:01:57,443 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-31 08:01:57,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 08:01:57,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-31 08:01:57,448 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 08:01:57,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] master.MasterRpcServices(697): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-31 08:01:57,448 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 08:01:57,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 08:01:57,450 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,450 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95 empty. 2023-05-31 08:01:57,451 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,451 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-31 08:01:57,459 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-31 08:01:57,460 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4a58b26d7b304bf14530805684542a95, NAME => 'TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/.tmp 2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] 
regionserver.HRegion(1604): Closing 4a58b26d7b304bf14530805684542a95, disabling compactions & flushes 2023-05-31 08:01:57,468 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. after waiting 0 ms 2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:01:57,468 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 
2023-05-31 08:01:57,468 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:01:57,471 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 08:01:57,472 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520117472"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520117472"}]},"ts":"1685520117472"} 2023-05-31 08:01:57,474 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-31 08:01:57,475 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 08:01:57,475 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520117475"}]},"ts":"1685520117475"} 2023-05-31 08:01:57,477 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-31 08:01:57,496 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, ASSIGN}] 2023-05-31 08:01:57,498 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, ASSIGN 2023-05-31 08:01:57,499 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,42933,1685520115967; forceNewPlan=false, retain=false 2023-05-31 08:01:57,650 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4a58b26d7b304bf14530805684542a95, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:57,650 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520117650"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520117650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520117650"}]},"ts":"1685520117650"} 2023-05-31 08:01:57,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967}] 2023-05-31 08:01:57,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 
2023-05-31 08:01:57,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4a58b26d7b304bf14530805684542a95, NAME => 'TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:01:57,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:01:57,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,816 INFO [StoreOpener-4a58b26d7b304bf14530805684542a95-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,818 DEBUG [StoreOpener-4a58b26d7b304bf14530805684542a95-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info 2023-05-31 08:01:57,819 DEBUG [StoreOpener-4a58b26d7b304bf14530805684542a95-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info 2023-05-31 08:01:57,819 INFO [StoreOpener-4a58b26d7b304bf14530805684542a95-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4a58b26d7b304bf14530805684542a95 columnFamilyName info 2023-05-31 08:01:57,820 INFO [StoreOpener-4a58b26d7b304bf14530805684542a95-1] regionserver.HStore(310): Store=4a58b26d7b304bf14530805684542a95/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:01:57,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] 
regionserver.HRegion(1055): writing seq id for 4a58b26d7b304bf14530805684542a95 2023-05-31 08:01:57,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:01:57,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 4a58b26d7b304bf14530805684542a95; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=748034, jitterRate=-0.0488257110118866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:01:57,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:01:57,833 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95., pid=11, masterSystemTime=1685520117805 2023-05-31 08:01:57,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:01:57,836 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 
2023-05-31 08:01:57,837 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4a58b26d7b304bf14530805684542a95, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:01:57,837 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520117837"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520117837"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520117837"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520117837"}]},"ts":"1685520117837"} 2023-05-31 08:01:57,843 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-31 08:01:57,843 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967 in 188 msec 2023-05-31 08:01:57,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-31 08:01:57,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, ASSIGN in 347 msec 2023-05-31 08:01:57,847 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 08:01:57,847 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520117847"}]},"ts":"1685520117847"} 2023-05-31 08:01:57,848 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-31 08:01:57,883 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 08:01:57,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 441 msec 2023-05-31 08:01:59,224 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 08:02:02,360 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-31 08:02:02,361 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-31 08:02:02,361 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-31 08:02:07,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42479] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-31 08:02:07,450 INFO [Listener at localhost.localdomain/39789] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, procId: 9 completed 2023-05-31 08:02:07,452 DEBUG [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-31 08:02:07,452 DEBUG [Listener at localhost.localdomain/39789] 
hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:02:07,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 4a58b26d7b304bf14530805684542a95 2023-05-31 08:02:07,468 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:02:07,480 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/49ac38b9aa834aa1b671065501f0ce10 2023-05-31 08:02:07,489 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/49ac38b9aa834aa1b671065501f0ce10 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10 2023-05-31 08:02:07,495 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10, entries=7, sequenceid=11, filesize=12.1 K 2023-05-31 08:02:07,496 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 4a58b26d7b304bf14530805684542a95 in 28ms, sequenceid=11, compaction requested=false 2023-05-31 08:02:07,497 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:02:07,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 4a58b26d7b304bf14530805684542a95 2023-05-31 08:02:07,497 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-31 08:02:07,507 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=33 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/1f072e12bbb24fe8bff1fb309c18f5e0 2023-05-31 08:02:07,513 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/1f072e12bbb24fe8bff1fb309c18f5e0 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0 2023-05-31 08:02:07,519 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0, entries=19, sequenceid=33, filesize=24.7 K 2023-05-31 08:02:07,520 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for 4a58b26d7b304bf14530805684542a95 in 23ms, sequenceid=33, compaction requested=false 2023-05-31 08:02:07,520 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:02:07,520 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.9 K, sizeToCheck=16.0 K 2023-05-31 08:02:07,520 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 08:02:07,520 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0 because midkey is the same as first or last row 2023-05-31 08:02:09,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 4a58b26d7b304bf14530805684542a95 2023-05-31 08:02:09,509 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:02:09,530 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=43 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/b393660e47c94542b7774cb0bbf072ae 2023-05-31 08:02:09,538 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/b393660e47c94542b7774cb0bbf072ae as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae 2023-05-31 08:02:09,546 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae, entries=7, sequenceid=43, filesize=12.1 K 2023-05-31 08:02:09,547 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=14.71 KB/15064 for 4a58b26d7b304bf14530805684542a95 in 38ms, sequenceid=43, compaction requested=true 2023-05-31 08:02:09,547 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:02:09,547 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=49.0 K, sizeToCheck=16.0 K 2023-05-31 08:02:09,547 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 08:02:09,547 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0 because midkey is the same as first or last row 2023-05-31 08:02:09,548 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:09,548 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:09,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 4a58b26d7b304bf14530805684542a95 2023-05-31 08:02:09,550 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): 
Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB 2023-05-31 08:02:09,550 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 50141 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:09,551 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 4a58b26d7b304bf14530805684542a95/info is initiating minor compaction (all files) 2023-05-31 08:02:09,551 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4a58b26d7b304bf14530805684542a95/info in TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 2023-05-31 08:02:09,552 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp, totalSize=49.0 K 2023-05-31 08:02:09,552 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 49ac38b9aa834aa1b671065501f0ce10, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, 
earliestPutTs=1685520127455 2023-05-31 08:02:09,554 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 1f072e12bbb24fe8bff1fb309c18f5e0, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=33, earliestPutTs=1685520127469 2023-05-31 08:02:09,555 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting b393660e47c94542b7774cb0bbf072ae, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685520127498 2023-05-31 08:02:09,579 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=62 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/894071892d574cf580ad6d2294dc89ec 2023-05-31 08:02:09,583 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4a58b26d7b304bf14530805684542a95#info#compaction#29 average throughput is 33.86 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:09,588 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/894071892d574cf580ad6d2294dc89ec as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec 2023-05-31 08:02:09,591 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 08:02:09,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] ipc.CallRunner(144): callId: 71 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:51762 deadline: 1685520139591, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:09,609 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec, entries=16, sequenceid=62, filesize=21.6 K 2023-05-31 08:02:09,610 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=13.66 KB/13988 for 4a58b26d7b304bf14530805684542a95 in 60ms, sequenceid=62, compaction requested=false 2023-05-31 08:02:09,610 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for 4a58b26d7b304bf14530805684542a95: 2023-05-31 08:02:09,610 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=70.6 K, sizeToCheck=16.0 K 2023-05-31 08:02:09,610 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-31 08:02:09,610 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0 because midkey is the same as first or last row 2023-05-31 08:02:09,612 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/5fd73d71dfe741b09e022031412a9d7d as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d 2023-05-31 08:02:09,618 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4a58b26d7b304bf14530805684542a95/info of 4a58b26d7b304bf14530805684542a95 into 5fd73d71dfe741b09e022031412a9d7d(size=39.6 K), total size for store is 61.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:02:09,618 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4a58b26d7b304bf14530805684542a95:
2023-05-31 08:02:09,618 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95., storeName=4a58b26d7b304bf14530805684542a95/info, priority=13, startTime=1685520129548; duration=0sec
2023-05-31 08:02:09,619 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=61.2 K, sizeToCheck=16.0 K
2023-05-31 08:02:09,619 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-05-31 08:02:09,619 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d because midkey is the same as first or last row
2023-05-31 08:02:09,619 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 08:02:19,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:19,644 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=14.71 KB heapSize=16 KB
2023-05-31 08:02:19,662 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=14.71 KB at sequenceid=80 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/bfcc00cafb1d4cb1a6df338dd30a1aa8
2023-05-31 08:02:19,670 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/bfcc00cafb1d4cb1a6df338dd30a1aa8 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8
2023-05-31 08:02:19,677 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8, entries=14, sequenceid=80, filesize=19.5 K
2023-05-31 08:02:19,678 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~14.71 KB/15064, heapSize ~15.98 KB/16368, currentSize=1.05 KB/1076 for 4a58b26d7b304bf14530805684542a95 in 34ms, sequenceid=80, compaction requested=true
2023-05-31 08:02:19,678 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4a58b26d7b304bf14530805684542a95:
2023-05-31 08:02:19,678 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=80.7 K, sizeToCheck=16.0 K
2023-05-31 08:02:19,678 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-05-31 08:02:19,679 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d because midkey is the same as first or last row
2023-05-31 08:02:19,679 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-05-31 08:02:19,679 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-05-31 08:02:19,681 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 82626 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-05-31 08:02:19,681 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 4a58b26d7b304bf14530805684542a95/info is initiating minor compaction (all files)
2023-05-31 08:02:19,681 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4a58b26d7b304bf14530805684542a95/info in TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.
2023-05-31 08:02:19,681 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp, totalSize=80.7 K
2023-05-31 08:02:19,682 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 5fd73d71dfe741b09e022031412a9d7d, keycount=33, bloomtype=ROW, size=39.6 K, encoding=NONE, compression=NONE, seqNum=43, earliestPutTs=1685520127455
2023-05-31 08:02:19,682 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 894071892d574cf580ad6d2294dc89ec, keycount=16, bloomtype=ROW, size=21.6 K, encoding=NONE, compression=NONE, seqNum=62, earliestPutTs=1685520129510
2023-05-31 08:02:19,683 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting bfcc00cafb1d4cb1a6df338dd30a1aa8, keycount=14, bloomtype=ROW, size=19.5 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685520129551
2023-05-31 08:02:19,695 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4a58b26d7b304bf14530805684542a95#info#compaction#31 average throughput is 32.32 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-05-31 08:02:19,707 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/5ef81493f44d4a5eb098e5d274a65ead as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead
2023-05-31 08:02:19,713 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4a58b26d7b304bf14530805684542a95/info of 4a58b26d7b304bf14530805684542a95 into 5ef81493f44d4a5eb098e5d274a65ead(size=71.4 K), total size for store is 71.4 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-05-31 08:02:19,713 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4a58b26d7b304bf14530805684542a95:
2023-05-31 08:02:19,713 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95., storeName=4a58b26d7b304bf14530805684542a95/info, priority=13, startTime=1685520139679; duration=0sec
2023-05-31 08:02:19,713 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=71.4 K, sizeToCheck=16.0 K
2023-05-31 08:02:19,713 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1
2023-05-31 08:02:19,714 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 08:02:19,714 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-05-31 08:02:19,715 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42479] assignment.AssignmentManager(1140): Split request from jenkins-hbase16.apache.org,42933,1685520115967, parent={ENCODED => 4a58b26d7b304bf14530805684542a95, NAME => 'TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.', STARTKEY => '', ENDKEY => ''} splitKey=row0062
2023-05-31 08:02:19,720 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42479] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase16.apache.org,42933,1685520115967
2023-05-31 08:02:19,726 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42479] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=4a58b26d7b304bf14530805684542a95, daughterA=cc28087a9523cd4b693a3e9f4a7a33a8, daughterB=5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:19,727 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=4a58b26d7b304bf14530805684542a95, daughterA=cc28087a9523cd4b693a3e9f4a7a33a8, daughterB=5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:19,727 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=4a58b26d7b304bf14530805684542a95, daughterA=cc28087a9523cd4b693a3e9f4a7a33a8, daughterB=5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:19,727 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=4a58b26d7b304bf14530805684542a95, daughterA=cc28087a9523cd4b693a3e9f4a7a33a8, daughterB=5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:19,738 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, UNASSIGN}]
2023-05-31 08:02:19,740 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, UNASSIGN
2023-05-31 08:02:19,742 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4a58b26d7b304bf14530805684542a95, regionState=CLOSING, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967
2023-05-31 08:02:19,742 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520139742"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520139742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520139742"}]},"ts":"1685520139742"}
2023-05-31 08:02:19,745 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967}]
2023-05-31 08:02:19,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.UnassignRegionHandler(111): Close 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:19,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 4a58b26d7b304bf14530805684542a95, disabling compactions & flushes
2023-05-31 08:02:19,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.
2023-05-31 08:02:19,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.
2023-05-31 08:02:19,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. after waiting 0 ms
2023-05-31 08:02:19,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.
2023-05-31 08:02:19,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 4a58b26d7b304bf14530805684542a95 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-05-31 08:02:19,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=85 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/0fe79b971bb64222b72563e1065df284
2023-05-31 08:02:19,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.tmp/info/0fe79b971bb64222b72563e1065df284 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/0fe79b971bb64222b72563e1065df284
2023-05-31 08:02:19,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/0fe79b971bb64222b72563e1065df284, entries=1, sequenceid=85, filesize=5.8 K
2023-05-31 08:02:19,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4a58b26d7b304bf14530805684542a95 in 29ms, sequenceid=85, compaction requested=false
2023-05-31 08:02:19,944 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8] to archive
2023-05-31 08:02:19,945 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-05-31 08:02:19,948 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/49ac38b9aa834aa1b671065501f0ce10
2023-05-31 08:02:19,949 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/1f072e12bbb24fe8bff1fb309c18f5e0
2023-05-31 08:02:19,950 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5fd73d71dfe741b09e022031412a9d7d
2023-05-31 08:02:19,952 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/b393660e47c94542b7774cb0bbf072ae
2023-05-31 08:02:19,953 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/894071892d574cf580ad6d2294dc89ec
2023-05-31 08:02:19,954 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/bfcc00cafb1d4cb1a6df338dd30a1aa8
2023-05-31 08:02:19,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=1
2023-05-31 08:02:19,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.
2023-05-31 08:02:19,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 4a58b26d7b304bf14530805684542a95:
2023-05-31 08:02:19,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.UnassignRegionHandler(149): Closed 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:19,967 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4a58b26d7b304bf14530805684542a95, regionState=CLOSED
2023-05-31 08:02:19,967 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520139967"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520139967"}]},"ts":"1685520139967"}
2023-05-31 08:02:19,970 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13
2023-05-31 08:02:19,970 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 4a58b26d7b304bf14530805684542a95, server=jenkins-hbase16.apache.org,42933,1685520115967 in 224 msec
2023-05-31 08:02:19,972 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12
2023-05-31 08:02:19,972 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=4a58b26d7b304bf14530805684542a95, UNASSIGN in 232 msec
2023-05-31 08:02:19,982 INFO [PEWorker-4] assignment.SplitTableRegionProcedure(694): pid=12 splitting 2 storefiles, region=4a58b26d7b304bf14530805684542a95, threads=2
2023-05-31 08:02:19,984 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/0fe79b971bb64222b72563e1065df284 for region: 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:19,984 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead for region: 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:19,992 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/0fe79b971bb64222b72563e1065df284, top=true
2023-05-31 08:02:20,001 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/.splits/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284 for child: 5ff2c972fd01aea5476201df2f7474d2, parent: 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:20,001 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/0fe79b971bb64222b72563e1065df284 for region: 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:20,017 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead for region: 4a58b26d7b304bf14530805684542a95
2023-05-31 08:02:20,017 DEBUG [PEWorker-4] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 4a58b26d7b304bf14530805684542a95 Daughter A: 1 storefiles, Daughter B: 2 storefiles.
2023-05-31 08:02:20,038 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1
2023-05-31 08:02:20,040 DEBUG [PEWorker-4] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/recovered.edits/88.seqid, newMaxSeqId=88, maxSeqId=-1
2023-05-31 08:02:20,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685520140041"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685520140041"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685520140041"}]},"ts":"1685520140041"}
2023-05-31 08:02:20,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520140041"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520140041"}]},"ts":"1685520140041"}
2023-05-31 08:02:20,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520140041"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520140041"}]},"ts":"1685520140041"}
2023-05-31 08:02:20,077 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42933] regionserver.HRegion(9158): Flush requested on 1588230740
2023-05-31 08:02:20,078 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all.
2023-05-31 08:02:20,078 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB
2023-05-31 08:02:20,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc28087a9523cd4b693a3e9f4a7a33a8, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5ff2c972fd01aea5476201df2f7474d2, ASSIGN}]
2023-05-31 08:02:20,088 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc28087a9523cd4b693a3e9f4a7a33a8, ASSIGN
2023-05-31 08:02:20,088 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5ff2c972fd01aea5476201df2f7474d2, ASSIGN
2023-05-31 08:02:20,089 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc28087a9523cd4b693a3e9f4a7a33a8, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase16.apache.org,42933,1685520115967; forceNewPlan=false, retain=false
2023-05-31 08:02:20,089 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5ff2c972fd01aea5476201df2f7474d2, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase16.apache.org,42933,1685520115967; forceNewPlan=false, retain=false
2023-05-31 08:02:20,089 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/info/ed7f39f249ff4839a060c16983ad44d4
2023-05-31 08:02:20,102 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/table/6731b655fe2b4fe9a6e1e6b4c85350b1
2023-05-31 08:02:20,107 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/info/ed7f39f249ff4839a060c16983ad44d4 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info/ed7f39f249ff4839a060c16983ad44d4
2023-05-31 08:02:20,111 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info/ed7f39f249ff4839a060c16983ad44d4, entries=29, sequenceid=17, filesize=8.6 K
2023-05-31 08:02:20,112 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/table/6731b655fe2b4fe9a6e1e6b4c85350b1 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/table/6731b655fe2b4fe9a6e1e6b4c85350b1
2023-05-31 08:02:20,118 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/table/6731b655fe2b4fe9a6e1e6b4c85350b1, entries=4, sequenceid=17, filesize=4.8 K
2023-05-31 08:02:20,119 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 41ms, sequenceid=17, compaction requested=false
2023-05-31 08:02:20,120 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-05-31 08:02:20,240 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=cc28087a9523cd4b693a3e9f4a7a33a8, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967
2023-05-31 08:02:20,240 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5ff2c972fd01aea5476201df2f7474d2, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967
2023-05-31 08:02:20,241 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140240"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520140240"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520140240"}]},"ts":"1685520140240"}
2023-05-31 08:02:20,241 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140240"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520140240"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520140240"}]},"ts":"1685520140240"}
2023-05-31 08:02:20,243 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967}]
2023-05-31 08:02:20,244 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure cc28087a9523cd4b693a3e9f4a7a33a8, server=jenkins-hbase16.apache.org,42933,1685520115967}]
2023-05-31 08:02:20,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.
2023-05-31 08:02:20,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ff2c972fd01aea5476201df2f7474d2, NAME => 'TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.', STARTKEY => 'row0062', ENDKEY => ''}
2023-05-31 08:02:20,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:20,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-31 08:02:20,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:20,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:20,408 INFO [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:02:20,409 DEBUG [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info
2023-05-31 08:02:20,410 DEBUG [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info
2023-05-31 08:02:20,410 INFO [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ff2c972fd01aea5476201df2f7474d2 columnFamilyName info
2023-05-31 08:02:20,421 DEBUG [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] regionserver.HStore(539): loaded
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-top 2023-05-31 08:02:20,427 DEBUG [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284 2023-05-31 08:02:20,427 INFO [StoreOpener-5ff2c972fd01aea5476201df2f7474d2-1] regionserver.HStore(310): Store=5ff2c972fd01aea5476201df2f7474d2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:02:20,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:20,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:20,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:20,431 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 5ff2c972fd01aea5476201df2f7474d2; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=710278, jitterRate=-0.09683595597743988}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:02:20,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:20,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., pid=17, masterSystemTime=1685520140396 2023-05-31 08:02:20,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:20,432 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-05-31 08:02:20,433 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:20,433 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:20,434 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 
2023-05-31 08:02:20,434 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-top, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=77.2 K 2023-05-31 08:02:20,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:20,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:20,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. 
2023-05-31 08:02:20,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc28087a9523cd4b693a3e9f4a7a33a8, NAME => 'TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-31 08:02:20,434 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1685520127455 2023-05-31 08:02:20,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,435 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5ff2c972fd01aea5476201df2f7474d2, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:20,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:02:20,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,435 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140435"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520140435"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520140435"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520140435"}]},"ts":"1685520140435"} 2023-05-31 08:02:20,435 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685520139646 2023-05-31 08:02:20,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,437 INFO [StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,438 DEBUG [StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info 2023-05-31 08:02:20,438 DEBUG [StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info 2023-05-31 08:02:20,438 INFO 
[StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc28087a9523cd4b693a3e9f4a7a33a8 columnFamilyName info 2023-05-31 08:02:20,439 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-05-31 08:02:20,439 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 in 194 msec 2023-05-31 08:02:20,441 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=5ff2c972fd01aea5476201df2f7474d2, ASSIGN in 352 msec 2023-05-31 08:02:20,444 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#35 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:20,448 DEBUG [StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-bottom 2023-05-31 08:02:20,452 INFO [StoreOpener-cc28087a9523cd4b693a3e9f4a7a33a8-1] regionserver.HStore(310): Store=cc28087a9523cd4b693a3e9f4a7a33a8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:02:20,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for cc28087a9523cd4b693a3e9f4a7a33a8 2023-05-31 08:02:20,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened cc28087a9523cd4b693a3e9f4a7a33a8; next sequenceid=89; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=803213, jitterRate=0.02133876085281372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:02:20,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for cc28087a9523cd4b693a3e9f4a7a33a8: 2023-05-31 08:02:20,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8., pid=18, masterSystemTime=1685520140396 2023-05-31 08:02:20,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 08:02:20,462 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-31 08:02:20,462 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. 
2023-05-31 08:02:20,462 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/e1f9ec7e79a3499991cde7f912e97f96 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/e1f9ec7e79a3499991cde7f912e97f96 2023-05-31 08:02:20,462 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HStore(1912): cc28087a9523cd4b693a3e9f4a7a33a8/info is initiating minor compaction (all files) 2023-05-31 08:02:20,462 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HRegion(2259): Starting compaction of cc28087a9523cd4b693a3e9f4a7a33a8/info in TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. 2023-05-31 08:02:20,463 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-bottom] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/.tmp, totalSize=71.4 K 2023-05-31 08:02:20,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for 
TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. 2023-05-31 08:02:20,463 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] compactions.Compactor(207): Compacting 5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95, keycount=31, bloomtype=ROW, size=71.4 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1685520127455 2023-05-31 08:02:20,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. 2023-05-31 08:02:20,464 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=cc28087a9523cd4b693a3e9f4a7a33a8, regionState=OPEN, openSeqNum=89, regionLocation=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:20,464 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685520140464"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520140464"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520140464"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520140464"}]},"ts":"1685520140464"} 2023-05-31 08:02:20,469 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-05-31 08:02:20,469 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure cc28087a9523cd4b693a3e9f4a7a33a8, server=jenkins-hbase16.apache.org,42933,1685520115967 in 222 msec 2023-05-31 08:02:20,471 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-31 08:02:20,472 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] 
regionserver.HStore(1652): Completed compaction of 2 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into e1f9ec7e79a3499991cde7f912e97f96(size=8.1 K), total size for store is 8.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 08:02:20,473 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:20,473 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=14, startTime=1685520140432; duration=0sec 2023-05-31 08:02:20,473 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:20,473 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-05-31 08:02:20,473 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] throttle.PressureAwareThroughputController(145): cc28087a9523cd4b693a3e9f4a7a33a8#info#compaction#36 average throughput is 15.65 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:20,473 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=cc28087a9523cd4b693a3e9f4a7a33a8, ASSIGN in 382 msec 2023-05-31 08:02:20,475 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=4a58b26d7b304bf14530805684542a95, daughterA=cc28087a9523cd4b693a3e9f4a7a33a8, daughterB=5ff2c972fd01aea5476201df2f7474d2 in 753 msec 2023-05-31 08:02:20,488 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/.tmp/info/7ad87d73280c48dd9fa65d6de5b8c3f1 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/7ad87d73280c48dd9fa65d6de5b8c3f1 2023-05-31 08:02:20,495 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in cc28087a9523cd4b693a3e9f4a7a33a8/info of cc28087a9523cd4b693a3e9f4a7a33a8 into 7ad87d73280c48dd9fa65d6de5b8c3f1(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:02:20,495 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for cc28087a9523cd4b693a3e9f4a7a33a8: 2023-05-31 08:02:20,495 INFO [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8., storeName=cc28087a9523cd4b693a3e9f4a7a33a8/info, priority=15, startTime=1685520140460; duration=0sec 2023-05-31 08:02:20,495 DEBUG [RS:0;jenkins-hbase16:42933-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:21,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:51762 deadline: 1685520151650, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685520117443.4a58b26d7b304bf14530805684542a95. 
is not online on jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:25,527 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 08:02:31,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:31,681 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:02:31,753 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=99 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/efa446e1aaa24fddb5e1427e37b6f564 2023-05-31 08:02:31,760 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/efa446e1aaa24fddb5e1427e37b6f564 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564 2023-05-31 08:02:31,768 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564, entries=7, sequenceid=99, filesize=12.1 K 2023-05-31 08:02:31,769 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 
5ff2c972fd01aea5476201df2f7474d2 in 88ms, sequenceid=99, compaction requested=false 2023-05-31 08:02:31,769 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:31,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:31,770 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-31 08:02:31,778 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=122 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/15b92107552c425c901a13e4f9f8a4f2 2023-05-31 08:02:31,784 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/15b92107552c425c901a13e4f9f8a4f2 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2 2023-05-31 08:02:31,789 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2, entries=20, sequenceid=122, filesize=25.8 K 2023-05-31 08:02:31,790 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 
5ff2c972fd01aea5476201df2f7474d2 in 19ms, sequenceid=122, compaction requested=true 2023-05-31 08:02:31,790 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:31,790 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 08:02:31,790 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:31,791 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 47068 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:31,791 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:31,791 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 
2023-05-31 08:02:31,791 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/e1f9ec7e79a3499991cde7f912e97f96, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=46.0 K 2023-05-31 08:02:31,792 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting e1f9ec7e79a3499991cde7f912e97f96, keycount=3, bloomtype=ROW, size=8.1 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685520129590 2023-05-31 08:02:31,792 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting efa446e1aaa24fddb5e1427e37b6f564, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=99, earliestPutTs=1685520151674 2023-05-31 08:02:31,792 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 15b92107552c425c901a13e4f9f8a4f2, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=122, earliestPutTs=1685520151682 2023-05-31 08:02:31,801 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#39 average throughput is 30.78 MB/second, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:31,811 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/32d11720b07c42e5a121aee943fdfb9d as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/32d11720b07c42e5a121aee943fdfb9d 2023-05-31 08:02:31,817 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into 32d11720b07c42e5a121aee943fdfb9d(size=36.6 K), total size for store is 36.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:02:31,817 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:31,817 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520151790; duration=0sec 2023-05-31 08:02:31,817 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:33,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:33,779 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:02:33,795 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=133 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/f9a612bcd878430cad9654769b338636 2023-05-31 08:02:33,803 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/f9a612bcd878430cad9654769b338636 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636 2023-05-31 08:02:33,811 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636, entries=7, sequenceid=133, filesize=12.1 K 2023-05-31 08:02:33,812 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=13.66 KB/13988 for 5ff2c972fd01aea5476201df2f7474d2 in 33ms, sequenceid=133, compaction requested=false 2023-05-31 08:02:33,812 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:33,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:33,814 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=14.71 KB heapSize=16 KB 2023-05-31 08:02:33,842 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=14.71 KB at sequenceid=150 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/a58e68552c0142508a83f3def80d76f6 2023-05-31 08:02:33,844 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 08:02:33,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] ipc.CallRunner(144): callId: 142 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:51762 deadline: 1685520163844, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:33,848 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/a58e68552c0142508a83f3def80d76f6 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6 2023-05-31 08:02:33,854 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6, entries=14, sequenceid=150, filesize=19.5 K 2023-05-31 08:02:33,855 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~14.71 KB/15064, heapSize ~15.98 KB/16368, currentSize=15.76 KB/16140 for 5ff2c972fd01aea5476201df2f7474d2 in 41ms, sequenceid=150, compaction requested=true 2023-05-31 08:02:33,855 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:33,855 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:33,855 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:33,857 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 69860 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:33,857 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:33,857 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 
2023-05-31 08:02:33,857 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/32d11720b07c42e5a121aee943fdfb9d, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=68.2 K 2023-05-31 08:02:33,858 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 32d11720b07c42e5a121aee943fdfb9d, keycount=30, bloomtype=ROW, size=36.6 K, encoding=NONE, compression=NONE, seqNum=122, earliestPutTs=1685520129590 2023-05-31 08:02:33,859 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting f9a612bcd878430cad9654769b338636, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=133, earliestPutTs=1685520151770 2023-05-31 08:02:33,859 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting a58e68552c0142508a83f3def80d76f6, keycount=14, bloomtype=ROW, size=19.5 K, encoding=NONE, compression=NONE, seqNum=150, earliestPutTs=1685520153779 2023-05-31 08:02:33,871 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#42 average throughput is 52.33 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:33,890 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/6a805af92bd44c4fb7265e409bd4a4d5 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6a805af92bd44c4fb7265e409bd4a4d5 2023-05-31 08:02:33,896 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into 6a805af92bd44c4fb7265e409bd4a4d5(size=58.8 K), total size for store is 58.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:02:33,896 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:33,896 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520153855; duration=0sec 2023-05-31 08:02:33,896 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:34,854 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=33, reuseRatio=71.74% 2023-05-31 08:02:34,854 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-31 08:02:41,918 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-31 08:02:43,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:43,929 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB 2023-05-31 08:02:43,939 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=170 (bloomFilter=true), 
to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/3de289c76a4c49f091c217e9ad2b3697 2023-05-31 08:02:43,945 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/3de289c76a4c49f091c217e9ad2b3697 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697 2023-05-31 08:02:43,950 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697, entries=16, sequenceid=170, filesize=21.6 K 2023-05-31 08:02:43,951 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=0 B/0 for 5ff2c972fd01aea5476201df2f7474d2 in 22ms, sequenceid=170, compaction requested=false 2023-05-31 08:02:43,951 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:45,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:45,937 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:02:45,945 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=180 (bloomFilter=true), 
to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/ad1e1b51773a42c28a72d9117909448c 2023-05-31 08:02:45,951 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/ad1e1b51773a42c28a72d9117909448c as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c 2023-05-31 08:02:45,957 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c, entries=7, sequenceid=180, filesize=12.1 K 2023-05-31 08:02:45,958 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 5ff2c972fd01aea5476201df2f7474d2 in 21ms, sequenceid=180, compaction requested=true 2023-05-31 08:02:45,958 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:45,958 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:45,958 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:45,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): 
Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:45,960 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 08:02:45,961 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 94768 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:45,961 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:45,961 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:45,961 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6a805af92bd44c4fb7265e409bd4a4d5, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=92.5 K 2023-05-31 08:02:45,962 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] 
compactions.Compactor(207): Compacting 6a805af92bd44c4fb7265e409bd4a4d5, keycount=51, bloomtype=ROW, size=58.8 K, encoding=NONE, compression=NONE, seqNum=150, earliestPutTs=1685520129590 2023-05-31 08:02:45,962 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 3de289c76a4c49f091c217e9ad2b3697, keycount=16, bloomtype=ROW, size=21.6 K, encoding=NONE, compression=NONE, seqNum=170, earliestPutTs=1685520153814 2023-05-31 08:02:45,962 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting ad1e1b51773a42c28a72d9117909448c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=180, earliestPutTs=1685520165930 2023-05-31 08:02:45,981 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=201 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/889da8520ee6488b87b8e68e51b21cdb 2023-05-31 08:02:45,984 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#46 average throughput is 75.94 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:46,000 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/889da8520ee6488b87b8e68e51b21cdb as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb 2023-05-31 08:02:46,003 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/71bf532339aa4f9fa1399a05a7b242a4 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/71bf532339aa4f9fa1399a05a7b242a4 2023-05-31 08:02:46,005 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb, entries=18, sequenceid=201, filesize=23.7 K 2023-05-31 08:02:46,007 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=7.36 KB/7532 for 5ff2c972fd01aea5476201df2f7474d2 in 47ms, sequenceid=201, compaction requested=false 2023-05-31 08:02:46,007 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:46,010 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 
3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into 71bf532339aa4f9fa1399a05a7b242a4(size=83.3 K), total size for store is 107.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 08:02:46,010 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:46,010 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520165958; duration=0sec 2023-05-31 08:02:46,010 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:47,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:47,974 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-31 08:02:47,987 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=213 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/99c744b955cf483ead154e56d0f952f5 2023-05-31 08:02:47,992 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/99c744b955cf483ead154e56d0f952f5 as 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5 2023-05-31 08:02:47,997 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5, entries=8, sequenceid=213, filesize=13.2 K 2023-05-31 08:02:47,998 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=17.86 KB/18292 for 5ff2c972fd01aea5476201df2f7474d2 in 25ms, sequenceid=213, compaction requested=true 2023-05-31 08:02:47,998 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:47,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:47,998 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 08:02:47,998 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:47,998 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-31 08:02:47,999 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 123035 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:48,000 DEBUG 
[RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:48,000 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:48,000 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/71bf532339aa4f9fa1399a05a7b242a4, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=120.2 K 2023-05-31 08:02:48,000 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 71bf532339aa4f9fa1399a05a7b242a4, keycount=74, bloomtype=ROW, size=83.3 K, encoding=NONE, compression=NONE, seqNum=180, earliestPutTs=1685520129590 2023-05-31 08:02:48,001 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 889da8520ee6488b87b8e68e51b21cdb, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=201, earliestPutTs=1685520165937 2023-05-31 08:02:48,001 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] 
compactions.Compactor(207): Compacting 99c744b955cf483ead154e56d0f952f5, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685520165960 2023-05-31 08:02:48,018 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=234 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/7e080c63fded448f873e9cf0c4c2acb3 2023-05-31 08:02:48,019 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 08:02:48,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:51762 deadline: 1685520178018, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, 
regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:02:48,021 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#49 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:48,024 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/7e080c63fded448f873e9cf0c4c2acb3 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3 2023-05-31 08:02:48,029 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3, entries=18, sequenceid=234, filesize=23.7 K 2023-05-31 08:02:48,030 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 5ff2c972fd01aea5476201df2f7474d2 in 32ms, sequenceid=234, compaction requested=false 2023-05-31 08:02:48,030 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:48,044 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/cd37a67c6b5d4df7bf41ed2544fa4261 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/cd37a67c6b5d4df7bf41ed2544fa4261 2023-05-31 08:02:48,049 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into cd37a67c6b5d4df7bf41ed2544fa4261(size=110.7 K), total size for store is 134.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-31 08:02:48,049 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:48,050 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520167998; duration=0sec 2023-05-31 08:02:48,050 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:02:58,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:02:58,039 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-05-31 08:02:58,052 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=250 
(bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/50030127258b4f04b659d4f49f32d527 2023-05-31 08:02:58,057 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/50030127258b4f04b659d4f49f32d527 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527 2023-05-31 08:02:58,061 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527, entries=12, sequenceid=250, filesize=17.4 K 2023-05-31 08:02:58,062 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 5ff2c972fd01aea5476201df2f7474d2 in 23ms, sequenceid=250, compaction requested=true 2023-05-31 08:02:58,062 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:58,062 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 08:02:58,062 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:02:58,063 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] 
compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155485 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:02:58,063 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:02:58,064 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:02:58,064 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/cd37a67c6b5d4df7bf41ed2544fa4261, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=151.8 K 2023-05-31 08:02:58,064 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting cd37a67c6b5d4df7bf41ed2544fa4261, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=213, earliestPutTs=1685520129590 2023-05-31 08:02:58,064 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 
7e080c63fded448f873e9cf0c4c2acb3, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=234, earliestPutTs=1685520167975 2023-05-31 08:02:58,065 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 50030127258b4f04b659d4f49f32d527, keycount=12, bloomtype=ROW, size=17.4 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685520167999 2023-05-31 08:02:58,077 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#51 average throughput is 33.35 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:02:58,085 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/3c51cd8ad238418e9d67aca6e20d57dc as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3c51cd8ad238418e9d67aca6e20d57dc 2023-05-31 08:02:58,091 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into 3c51cd8ad238418e9d67aca6e20d57dc(size=142.6 K), total size for store is 142.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:02:58,091 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:02:58,091 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520178062; duration=0sec 2023-05-31 08:02:58,091 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:03:00,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:03:00,048 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-31 08:03:00,057 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=261 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/8d7ab9189ffc45cf9548bb16b8802fbe 2023-05-31 08:03:00,064 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/8d7ab9189ffc45cf9548bb16b8802fbe as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe 2023-05-31 08:03:00,071 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe, entries=7, sequenceid=261, filesize=12.1 K 2023-05-31 08:03:00,071 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 5ff2c972fd01aea5476201df2f7474d2 in 23ms, sequenceid=261, compaction requested=false 2023-05-31 08:03:00,072 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:00,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:03:00,072 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-31 08:03:00,080 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=281 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/21da6ff9f0f642dfb593fdee399a7969 2023-05-31 08:03:00,085 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/21da6ff9f0f642dfb593fdee399a7969 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969 2023-05-31 08:03:00,090 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969, entries=17, sequenceid=281, filesize=22.7 K 2023-05-31 08:03:00,090 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for 5ff2c972fd01aea5476201df2f7474d2 in 18ms, sequenceid=281, compaction requested=true 2023-05-31 08:03:00,091 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:00,091 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:03:00,091 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:03:00,092 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 181686 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:03:00,092 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:03:00,092 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 
2023-05-31 08:03:00,092 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3c51cd8ad238418e9d67aca6e20d57dc, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=177.4 K 2023-05-31 08:03:00,092 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 3c51cd8ad238418e9d67aca6e20d57dc, keycount=130, bloomtype=ROW, size=142.6 K, encoding=NONE, compression=NONE, seqNum=250, earliestPutTs=1685520129590 2023-05-31 08:03:00,093 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 8d7ab9189ffc45cf9548bb16b8802fbe, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=261, earliestPutTs=1685520178040 2023-05-31 08:03:00,093 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 21da6ff9f0f642dfb593fdee399a7969, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685520180049 2023-05-31 08:03:00,104 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#54 average throughput is 79.01 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:03:00,521 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/028a55db51ec4c9684c4c3263f4edb77 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/028a55db51ec4c9684c4c3263f4edb77 2023-05-31 08:03:00,531 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into 028a55db51ec4c9684c4c3263f4edb77(size=168.0 K), total size for store is 168.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:03:00,531 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:00,531 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520180091; duration=0sec 2023-05-31 08:03:00,531 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:03:02,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:03:02,086 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-31 08:03:02,100 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=295 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/6bdfc2d353e44f4a98c31c77278df7a4 2023-05-31 08:03:02,108 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/6bdfc2d353e44f4a98c31c77278df7a4 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4 2023-05-31 08:03:02,114 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4, entries=10, sequenceid=295, filesize=15.3 K 2023-05-31 08:03:02,115 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=14.71 KB/15064 for 5ff2c972fd01aea5476201df2f7474d2 in 29ms, sequenceid=295, compaction requested=false 2023-05-31 08:03:02,115 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:02,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:03:02,115 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=15.76 KB heapSize=17.13 KB 2023-05-31 08:03:02,125 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.76 KB at sequenceid=313 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/1309aade26a747c986174d96ccc82d7a 2023-05-31 08:03:02,132 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/1309aade26a747c986174d96ccc82d7a as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a 2023-05-31 08:03:02,136 WARN 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-31 08:03:02,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 188.40.62.62:51762 deadline: 1685520192136, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=5ff2c972fd01aea5476201df2f7474d2, server=jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:03:02,138 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a, entries=15, sequenceid=313, filesize=20.6 K 2023-05-31 08:03:02,138 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~15.76 KB/16140, heapSize ~17.11 KB/17520, currentSize=14.71 KB/15064 for 
5ff2c972fd01aea5476201df2f7474d2 in 23ms, sequenceid=313, compaction requested=true 2023-05-31 08:03:02,139 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:02,139 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-31 08:03:02,139 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-31 08:03:02,140 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 208767 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-31 08:03:02,140 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1912): 5ff2c972fd01aea5476201df2f7474d2/info is initiating minor compaction (all files) 2023-05-31 08:03:02,140 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 5ff2c972fd01aea5476201df2f7474d2/info in TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 
2023-05-31 08:03:02,140 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/028a55db51ec4c9684c4c3263f4edb77, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a] into tmpdir=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp, totalSize=203.9 K 2023-05-31 08:03:02,141 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 028a55db51ec4c9684c4c3263f4edb77, keycount=154, bloomtype=ROW, size=168.0 K, encoding=NONE, compression=NONE, seqNum=281, earliestPutTs=1685520129590 2023-05-31 08:03:02,141 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 6bdfc2d353e44f4a98c31c77278df7a4, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=295, earliestPutTs=1685520180072 2023-05-31 08:03:02,141 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] compactions.Compactor(207): Compacting 1309aade26a747c986174d96ccc82d7a, keycount=15, bloomtype=ROW, size=20.6 K, encoding=NONE, compression=NONE, seqNum=313, earliestPutTs=1685520182088 2023-05-31 08:03:02,152 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] throttle.PressureAwareThroughputController(145): 5ff2c972fd01aea5476201df2f7474d2#info#compaction#57 average throughput is 91.84 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-31 08:03:02,167 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/b6e35e3297de453b8e723df475874736 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/b6e35e3297de453b8e723df475874736 2023-05-31 08:03:02,173 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 5ff2c972fd01aea5476201df2f7474d2/info of 5ff2c972fd01aea5476201df2f7474d2 into b6e35e3297de453b8e723df475874736(size=194.5 K), total size for store is 194.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-31 08:03:02,173 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:02,173 INFO [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., storeName=5ff2c972fd01aea5476201df2f7474d2/info, priority=13, startTime=1685520182139; duration=0sec 2023-05-31 08:03:02,173 DEBUG [RS:0;jenkins-hbase16:42933-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-31 08:03:12,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42933] regionserver.HRegion(9158): Flush requested on 5ff2c972fd01aea5476201df2f7474d2 2023-05-31 08:03:12,210 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=15.76 KB heapSize=17.13 KB 2023-05-31 08:03:12,228 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.76 KB at sequenceid=332 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/25b22f50f67d4fcfb2f54e73791941b3 2023-05-31 08:03:12,234 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/25b22f50f67d4fcfb2f54e73791941b3 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/25b22f50f67d4fcfb2f54e73791941b3 2023-05-31 08:03:12,239 INFO 
[MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/25b22f50f67d4fcfb2f54e73791941b3, entries=15, sequenceid=332, filesize=20.6 K 2023-05-31 08:03:12,240 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~15.76 KB/16140, heapSize ~17.11 KB/17520, currentSize=1.05 KB/1076 for 5ff2c972fd01aea5476201df2f7474d2 in 30ms, sequenceid=332, compaction requested=false 2023-05-31 08:03:12,240 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:14,213 INFO [Listener at localhost.localdomain/39789] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-31 08:03:14,248 INFO [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520116492 with entries=316, filesize=309.16 KB; new WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194214 2023-05-31 08:03:14,249 DEBUG [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45257,DS-2411467d-81bb-4690-9da8-12d424e3eea8,DISK], DatanodeInfoWithStorage[127.0.0.1:45941,DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505,DISK]] 2023-05-31 08:03:14,249 DEBUG [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520116492 is not closed yet, will try 
archiving it next time 2023-05-31 08:03:14,259 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegion(2446): Flush status journal for cc28087a9523cd4b693a3e9f4a7a33a8: 2023-05-31 08:03:14,259 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2745): Flushing 8f30553b0d7dd52eeef80e45f1556424 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 08:03:14,269 INFO [Listener at localhost.localdomain/39789] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/.tmp/info/59fb5bf31ef946e39c2e3b54547e333e 2023-05-31 08:03:14,275 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/.tmp/info/59fb5bf31ef946e39c2e3b54547e333e as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/info/59fb5bf31ef946e39c2e3b54547e333e 2023-05-31 08:03:14,281 INFO [Listener at localhost.localdomain/39789] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/info/59fb5bf31ef946e39c2e3b54547e333e, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 08:03:14,282 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 8f30553b0d7dd52eeef80e45f1556424 in 23ms, sequenceid=6, compaction requested=false 2023-05-31 08:03:14,283 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegion(2446): Flush status journal for 8f30553b0d7dd52eeef80e45f1556424: 
2023-05-31 08:03:14,283 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2745): Flushing 5ff2c972fd01aea5476201df2f7474d2 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB
2023-05-31 08:03:14,292 INFO [Listener at localhost.localdomain/39789] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=336 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/9e7f70a885774ac3abd6be41909d0307
2023-05-31 08:03:14,300 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/.tmp/info/9e7f70a885774ac3abd6be41909d0307 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/9e7f70a885774ac3abd6be41909d0307
2023-05-31 08:03:14,306 INFO [Listener at localhost.localdomain/39789] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/9e7f70a885774ac3abd6be41909d0307, entries=1, sequenceid=336, filesize=5.8 K
2023-05-31 08:03:14,307 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 5ff2c972fd01aea5476201df2f7474d2 in 24ms, sequenceid=336, compaction requested=true
2023-05-31 08:03:14,308 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegion(2446): Flush status journal for 5ff2c972fd01aea5476201df2f7474d2:
2023-05-31 08:03:14,308 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB
2023-05-31 08:03:14,318 INFO [Listener at localhost.localdomain/39789] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/info/4b618e364b9c4337878c3b92e49d1bda
2023-05-31 08:03:14,326 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/.tmp/info/4b618e364b9c4337878c3b92e49d1bda as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info/4b618e364b9c4337878c3b92e49d1bda
2023-05-31 08:03:14,332 INFO [Listener at localhost.localdomain/39789] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/info/4b618e364b9c4337878c3b92e49d1bda, entries=16, sequenceid=24, filesize=7.0 K
2023-05-31 08:03:14,333 INFO [Listener at localhost.localdomain/39789] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 25ms, sequenceid=24, compaction requested=false
2023-05-31 08:03:14,333 DEBUG [Listener at localhost.localdomain/39789] regionserver.HRegion(2446): Flush status journal for 1588230740:
2023-05-31 08:03:14,339 INFO [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194214 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194333
2023-05-31 08:03:14,339 DEBUG [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45257,DS-2411467d-81bb-4690-9da8-12d424e3eea8,DISK], DatanodeInfoWithStorage[127.0.0.1:45941,DS-a5459f5b-61cf-4adb-b7ae-92c9adda9505,DISK]]
2023-05-31 08:03:14,340 DEBUG [Listener at localhost.localdomain/39789] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194214 is not closed yet, will try archiving it next time
2023-05-31 08:03:14,340 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520116492 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520116492
2023-05-31 08:03:14,341 INFO [Listener at localhost.localdomain/39789] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1])
2023-05-31 08:03:14,344 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194214 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs/jenkins-hbase16.apache.org%2C42933%2C1685520115967.1685520194214
2023-05-31 08:03:14,442 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-05-31 08:03:14,442 INFO [Listener at localhost.localdomain/39789] client.ConnectionImplementation(1974): Closing master protocol: MasterService
2023-05-31 08:03:14,442 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x043f18b4 to 127.0.0.1:61400
2023-05-31 08:03:14,442 DEBUG [Listener at localhost.localdomain/39789] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 08:03:14,442 DEBUG [Listener at localhost.localdomain/39789] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-05-31 08:03:14,443 DEBUG [Listener at localhost.localdomain/39789] util.JVMClusterUtil(257): Found active master hash=1230123972, stopped=false
2023-05-31 08:03:14,443 INFO [Listener at localhost.localdomain/39789] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,42479,1685520115822
2023-05-31 08:03:14,486 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-31 08:03:14,487 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-05-31 08:03:14,487 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 08:03:14,487 INFO [Listener at localhost.localdomain/39789] procedure2.ProcedureExecutor(629): Stopping
2023-05-31 08:03:14,487 DEBUG [Listener at localhost.localdomain/39789] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3eee5691 to 127.0.0.1:61400
2023-05-31 08:03:14,490 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 08:03:14,490 DEBUG [Listener at localhost.localdomain/39789] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 08:03:14,490 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-05-31 08:03:14,490 INFO [Listener at localhost.localdomain/39789] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,42933,1685520115967' *****
2023-05-31 08:03:14,491 INFO [Listener at localhost.localdomain/39789] regionserver.HRegionServer(2309): STOPPED: Shutdown requested
2023-05-31 08:03:14,492 INFO [RS:0;jenkins-hbase16:42933] regionserver.HeapMemoryManager(220): Stopping
2023-05-31 08:03:14,492 INFO [RS:0;jenkins-hbase16:42933] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-05-31 08:03:14,492 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-05-31 08:03:14,492 INFO [RS:0;jenkins-hbase16:42933] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-05-31 08:03:14,492 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(3303): Received CLOSE for cc28087a9523cd4b693a3e9f4a7a33a8
2023-05-31 08:03:14,493 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(3303): Received CLOSE for 8f30553b0d7dd52eeef80e45f1556424
2023-05-31 08:03:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing cc28087a9523cd4b693a3e9f4a7a33a8, disabling compactions & flushes
2023-05-31 08:03:14,493 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(3303): Received CLOSE for 5ff2c972fd01aea5476201df2f7474d2
2023-05-31 08:03:14,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.
2023-05-31 08:03:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.
2023-05-31 08:03:14,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8. after waiting 0 ms
2023-05-31 08:03:14,493 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,42933,1685520115967
2023-05-31 08:03:14,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.
2023-05-31 08:03:14,494 DEBUG [RS:0;jenkins-hbase16:42933] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4d13788e to 127.0.0.1:61400
2023-05-31 08:03:14,494 DEBUG [RS:0;jenkins-hbase16:42933] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 08:03:14,494 INFO [RS:0;jenkins-hbase16:42933] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-05-31 08:03:14,494 INFO [RS:0;jenkins-hbase16:42933] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-05-31 08:03:14,494 INFO [RS:0;jenkins-hbase16:42933] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-05-31 08:03:14,494 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(3303): Received CLOSE for 1588230740
2023-05-31 08:03:14,497 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1474): Waiting on 4 regions to close
2023-05-31 08:03:14,497 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-bottom] to archive
2023-05-31 08:03:14,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-05-31 08:03:14,497 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1478): Online Regions={cc28087a9523cd4b693a3e9f4a7a33a8=TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8., 8f30553b0d7dd52eeef80e45f1556424=hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424., 5ff2c972fd01aea5476201df2f7474d2=TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2., 1588230740=hbase:meta,,1.1588230740}
2023-05-31 08:03:14,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-05-31 08:03:14,497 DEBUG [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1504): Waiting on 1588230740, 5ff2c972fd01aea5476201df2f7474d2, 8f30553b0d7dd52eeef80e45f1556424, cc28087a9523cd4b693a3e9f4a7a33a8
2023-05-31 08:03:14,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-05-31 08:03:14,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-05-31 08:03:14,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-05-31 08:03:14,499 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-05-31 08:03:14,503 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95
2023-05-31 08:03:14,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1
2023-05-31 08:03:14,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-05-31 08:03:14,509 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-05-31 08:03:14,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-05-31 08:03:14,509 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-05-31 08:03:14,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/cc28087a9523cd4b693a3e9f4a7a33a8/recovered.edits/93.seqid, newMaxSeqId=93, maxSeqId=88
2023-05-31 08:03:14,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for cc28087a9523cd4b693a3e9f4a7a33a8:
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685520139720.cc28087a9523cd4b693a3e9f4a7a33a8.
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 8f30553b0d7dd52eeef80e45f1556424, disabling compactions & flushes
2023-05-31 08:03:14,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424. after waiting 0 ms
2023-05-31 08:03:14,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.
2023-05-31 08:03:14,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/hbase/namespace/8f30553b0d7dd52eeef80e45f1556424/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-05-31 08:03:14,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.
2023-05-31 08:03:14,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 8f30553b0d7dd52eeef80e45f1556424:
2023-05-31 08:03:14,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685520116688.8f30553b0d7dd52eeef80e45f1556424.
2023-05-31 08:03:14,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 5ff2c972fd01aea5476201df2f7474d2, disabling compactions & flushes
2023-05-31 08:03:14,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.
2023-05-31 08:03:14,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.
2023-05-31 08:03:14,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. after waiting 0 ms
2023-05-31 08:03:14,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.
2023-05-31 08:03:14,532 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95->hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/4a58b26d7b304bf14530805684542a95/info/5ef81493f44d4a5eb098e5d274a65ead-top, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/e1f9ec7e79a3499991cde7f912e97f96, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/32d11720b07c42e5a121aee943fdfb9d, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6a805af92bd44c4fb7265e409bd4a4d5, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/71bf532339aa4f9fa1399a05a7b242a4, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/cd37a67c6b5d4df7bf41ed2544fa4261, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3c51cd8ad238418e9d67aca6e20d57dc, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/028a55db51ec4c9684c4c3263f4edb77, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a] to archive
2023-05-31 08:03:14,533 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-05-31 08:03:14,535 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/5ef81493f44d4a5eb098e5d274a65ead.4a58b26d7b304bf14530805684542a95
2023-05-31 08:03:14,536 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/e1f9ec7e79a3499991cde7f912e97f96 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/e1f9ec7e79a3499991cde7f912e97f96
2023-05-31 08:03:14,537 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/TestLogRolling-testLogRolling=4a58b26d7b304bf14530805684542a95-0fe79b971bb64222b72563e1065df284
2023-05-31 08:03:14,538 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/efa446e1aaa24fddb5e1427e37b6f564
2023-05-31 08:03:14,539 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/32d11720b07c42e5a121aee943fdfb9d to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/32d11720b07c42e5a121aee943fdfb9d
2023-05-31 08:03:14,540 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/15b92107552c425c901a13e4f9f8a4f2
2023-05-31 08:03:14,541 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/f9a612bcd878430cad9654769b338636
2023-05-31 08:03:14,542 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6a805af92bd44c4fb7265e409bd4a4d5 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6a805af92bd44c4fb7265e409bd4a4d5
2023-05-31 08:03:14,543 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/a58e68552c0142508a83f3def80d76f6
2023-05-31 08:03:14,544 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3de289c76a4c49f091c217e9ad2b3697
2023-05-31 08:03:14,545 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/71bf532339aa4f9fa1399a05a7b242a4 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/71bf532339aa4f9fa1399a05a7b242a4
2023-05-31 08:03:14,546 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/ad1e1b51773a42c28a72d9117909448c
2023-05-31 08:03:14,547 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/889da8520ee6488b87b8e68e51b21cdb
2023-05-31 08:03:14,548 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/cd37a67c6b5d4df7bf41ed2544fa4261 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/cd37a67c6b5d4df7bf41ed2544fa4261
2023-05-31 08:03:14,549 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/99c744b955cf483ead154e56d0f952f5
2023-05-31 08:03:14,550 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/7e080c63fded448f873e9cf0c4c2acb3
2023-05-31 08:03:14,551 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3c51cd8ad238418e9d67aca6e20d57dc to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/3c51cd8ad238418e9d67aca6e20d57dc
2023-05-31 08:03:14,551 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527 to
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/50030127258b4f04b659d4f49f32d527 2023-05-31 08:03:14,552 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/8d7ab9189ffc45cf9548bb16b8802fbe 2023-05-31 08:03:14,553 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/028a55db51ec4c9684c4c3263f4edb77 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/028a55db51ec4c9684c4c3263f4edb77 2023-05-31 08:03:14,553 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969 to 
hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/21da6ff9f0f642dfb593fdee399a7969 2023-05-31 08:03:14,554 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4 to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/6bdfc2d353e44f4a98c31c77278df7a4 2023-05-31 08:03:14,555 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a to hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/archive/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/info/1309aade26a747c986174d96ccc82d7a 2023-05-31 08:03:14,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/data/default/TestLogRolling-testLogRolling/5ff2c972fd01aea5476201df2f7474d2/recovered.edits/339.seqid, newMaxSeqId=339, maxSeqId=88 2023-05-31 08:03:14,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed 
TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:03:14,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 5ff2c972fd01aea5476201df2f7474d2: 2023-05-31 08:03:14,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685520139720.5ff2c972fd01aea5476201df2f7474d2. 2023-05-31 08:03:14,697 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,42933,1685520115967; all regions closed. 2023-05-31 08:03:14,698 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:03:14,711 DEBUG [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs 2023-05-31 08:03:14,711 INFO [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C42933%2C1685520115967.meta:.meta(num 1685520116618) 2023-05-31 08:03:14,711 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/WALs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:03:14,718 DEBUG [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/oldWALs 2023-05-31 08:03:14,718 INFO [RS:0;jenkins-hbase16:42933] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C42933%2C1685520115967:(num 1685520194333) 2023-05-31 08:03:14,719 DEBUG [RS:0;jenkins-hbase16:42933] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:14,719 INFO [RS:0;jenkins-hbase16:42933] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:03:14,719 INFO 
[RS:0;jenkins-hbase16:42933] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 08:03:14,719 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:03:14,720 INFO [RS:0;jenkins-hbase16:42933] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:42933 2023-05-31 08:03:14,728 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,42933,1685520115967 2023-05-31 08:03:14,728 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:03:14,728 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:03:14,736 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,42933,1685520115967] 2023-05-31 08:03:14,736 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,42933,1685520115967; numProcessing=1 2023-05-31 08:03:14,744 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,42933,1685520115967 already deleted, retry=false 2023-05-31 08:03:14,745 INFO [RegionServerTracker-0] master.ServerManager(561): 
Cluster shutdown set; jenkins-hbase16.apache.org,42933,1685520115967 expired; onlineServers=0 2023-05-31 08:03:14,745 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,42479,1685520115822' ***** 2023-05-31 08:03:14,745 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-31 08:03:14,746 DEBUG [M:0;jenkins-hbase16:42479] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d6dbe36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:03:14,746 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:03:14,746 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,42479,1685520115822; all regions closed. 2023-05-31 08:03:14,746 DEBUG [M:0;jenkins-hbase16:42479] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:14,746 DEBUG [M:0;jenkins-hbase16:42479] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-31 08:03:14,746 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-31 08:03:14,746 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520116241] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520116241,5,FailOnTimeoutGroup] 2023-05-31 08:03:14,746 DEBUG [M:0;jenkins-hbase16:42479] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-31 08:03:14,746 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520116241] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520116241,5,FailOnTimeoutGroup] 2023-05-31 08:03:14,748 INFO [M:0;jenkins-hbase16:42479] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-31 08:03:14,750 INFO [M:0;jenkins-hbase16:42479] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-31 08:03:14,750 INFO [M:0;jenkins-hbase16:42479] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-05-31 08:03:14,750 DEBUG [M:0;jenkins-hbase16:42479] master.HMaster(1512): Stopping service threads 2023-05-31 08:03:14,750 INFO [M:0;jenkins-hbase16:42479] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-31 08:03:14,752 ERROR [M:0;jenkins-hbase16:42479] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-31 08:03:14,752 INFO [M:0;jenkins-hbase16:42479] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-31 08:03:14,752 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-31 08:03:14,758 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-31 08:03:14,758 DEBUG [M:0;jenkins-hbase16:42479] zookeeper.ZKUtil(398): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-31 08:03:14,758 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:14,758 WARN [M:0;jenkins-hbase16:42479] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-31 08:03:14,758 INFO [M:0;jenkins-hbase16:42479] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-31 08:03:14,759 INFO [M:0;jenkins-hbase16:42479] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-31 08:03:14,759 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:03:14,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:03:14,759 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:03:14,759 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:14,759 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:03:14,759 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:14,760 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-05-31 08:03:14,771 INFO [M:0;jenkins-hbase16:42479] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d9a131ffe7dd43bf98800267eed39484 2023-05-31 08:03:14,776 INFO [M:0;jenkins-hbase16:42479] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d9a131ffe7dd43bf98800267eed39484 2023-05-31 08:03:14,778 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d9a131ffe7dd43bf98800267eed39484 as hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d9a131ffe7dd43bf98800267eed39484 2023-05-31 08:03:14,782 INFO [M:0;jenkins-hbase16:42479] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d9a131ffe7dd43bf98800267eed39484 2023-05-31 08:03:14,782 INFO 
[M:0;jenkins-hbase16:42479] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36683/user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d9a131ffe7dd43bf98800267eed39484, entries=18, sequenceid=160, filesize=6.9 K 2023-05-31 08:03:14,783 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=160, compaction requested=false 2023-05-31 08:03:14,785 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:14,785 DEBUG [M:0;jenkins-hbase16:42479] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:03:14,785 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/9b3c87ed-9313-b343-38d3-dec1c6727507/MasterData/WALs/jenkins-hbase16.apache.org,42479,1685520115822 2023-05-31 08:03:14,789 INFO [M:0;jenkins-hbase16:42479] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-31 08:03:14,789 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-31 08:03:14,789 INFO [M:0;jenkins-hbase16:42479] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:42479 2023-05-31 08:03:14,800 DEBUG [M:0;jenkins-hbase16:42479] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,42479,1685520115822 already deleted, retry=false 2023-05-31 08:03:14,837 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:03:14,837 INFO [RS:0;jenkins-hbase16:42933] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,42933,1685520115967; zookeeper connection closed. 2023-05-31 08:03:14,837 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): regionserver:42933-0x1008042269b0001, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:03:14,838 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7ae40432] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7ae40432 2023-05-31 08:03:14,838 INFO [Listener at localhost.localdomain/39789] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-31 08:03:14,937 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:03:14,937 INFO [M:0;jenkins-hbase16:42479] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,42479,1685520115822; zookeeper connection closed. 
2023-05-31 08:03:14,937 DEBUG [Listener at localhost.localdomain/39789-EventThread] zookeeper.ZKWatcher(600): master:42479-0x1008042269b0000, quorum=127.0.0.1:61400, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-31 08:03:14,939 WARN [Listener at localhost.localdomain/39789] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:03:14,949 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 08:03:15,058 WARN [BP-245080937-188.40.62.62-1685520114475 heartbeating to localhost.localdomain/127.0.0.1:36683] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:03:15,058 WARN [BP-245080937-188.40.62.62-1685520114475 heartbeating to localhost.localdomain/127.0.0.1:36683] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-245080937-188.40.62.62-1685520114475 (Datanode Uuid 553bed65-4b39-43b7-8e56-51d6a4569ec0) service to localhost.localdomain/127.0.0.1:36683 2023-05-31 08:03:15,059 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/dfs/data/data3/current/BP-245080937-188.40.62.62-1685520114475] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:03:15,060 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/dfs/data/data4/current/BP-245080937-188.40.62.62-1685520114475] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:03:15,062 WARN [Listener at localhost.localdomain/39789] 
datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-31 08:03:15,068 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-31 08:03:15,179 WARN [BP-245080937-188.40.62.62-1685520114475 heartbeating to localhost.localdomain/127.0.0.1:36683] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-31 08:03:15,179 WARN [BP-245080937-188.40.62.62-1685520114475 heartbeating to localhost.localdomain/127.0.0.1:36683] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-245080937-188.40.62.62-1685520114475 (Datanode Uuid 0ff8cb49-8536-4992-ac43-af2097cb89b6) service to localhost.localdomain/127.0.0.1:36683 2023-05-31 08:03:15,180 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/dfs/data/data1/current/BP-245080937-188.40.62.62-1685520114475] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:03:15,180 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/cluster_da8ed8ed-25ae-40b7-0308-5496826d9e71/dfs/data/data2/current/BP-245080937-188.40.62.62-1685520114475] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-31 08:03:15,196 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-31 08:03:15,319 INFO [Listener at localhost.localdomain/39789] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-31 08:03:15,347 INFO [Listener at localhost.localdomain/39789] 
hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-31 08:03:15,357 INFO [Listener at localhost.localdomain/39789] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=108 (was 97) - Thread LEAK? -, OpenFileDescriptor=532 (was 502) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=31 (was 40), ProcessCount=164 (was 166), AvailableMemoryMB=8867 (was 7578) - AvailableMemoryMB LEAK? - 2023-05-31 08:03:15,365 INFO [Listener at localhost.localdomain/39789] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=108, OpenFileDescriptor=532, MaxFileDescriptor=60000, SystemLoadAverage=31, ProcessCount=164, AvailableMemoryMB=8867 2023-05-31 08:03:15,365 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/hadoop.log.dir so I do NOT create it in target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/121e5e04-9f29-ee6c-2e46-1889c026b290/hadoop.tmp.dir so I do NOT create it in target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c, deleteOnExit=true 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/test.cache.data in system properties and HBase conf 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/hadoop.tmp.dir in system properties and HBase conf 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/hadoop.log.dir in system properties and HBase conf 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-31 08:03:15,366 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-31 08:03:15,366 INFO [Listener at 
localhost.localdomain/39789] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-31 08:03:15,366 DEBUG [Listener at localhost.localdomain/39789] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-31 08:03:15,367 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/nfs.dump.dir in system properties and HBase conf 2023-05-31 
08:03:15,368 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/java.io.tmpdir in system properties and HBase conf 2023-05-31 08:03:15,368 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-31 08:03:15,368 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-31 08:03:15,368 INFO [Listener at localhost.localdomain/39789] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-31 08:03:15,369 WARN [Listener at localhost.localdomain/39789] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 08:03:15,370 WARN [Listener at localhost.localdomain/39789] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 08:03:15,370 WARN [Listener at localhost.localdomain/39789] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 08:03:15,622 WARN [Listener at localhost.localdomain/39789] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 08:03:15,625 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:03:15,632 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/java.io.tmpdir/Jetty_localhost_localdomain_35861_hdfs____.cd2h91/webapp 2023-05-31 08:03:15,702 INFO [Listener at localhost.localdomain/39789] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:35861 2023-05-31 08:03:15,703 WARN [Listener at localhost.localdomain/39789] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-31 08:03:15,704 WARN [Listener at localhost.localdomain/39789] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-31 08:03:15,704 WARN [Listener at localhost.localdomain/39789] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-31 08:03:15,866 WARN [Listener at localhost.localdomain/33357] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:03:15,875 WARN [Listener at localhost.localdomain/33357] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 08:03:15,877 WARN [Listener at localhost.localdomain/33357] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-31 08:03:15,878 INFO [Listener at localhost.localdomain/33357] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:03:15,883 INFO [Listener at localhost.localdomain/33357] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/java.io.tmpdir/Jetty_localhost_40387_datanode____.cx2d2b/webapp 2023-05-31 08:03:15,956 INFO [Listener at localhost.localdomain/33357] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40387 2023-05-31 08:03:15,965 WARN [Listener at localhost.localdomain/34347] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:03:15,981 WARN [Listener at localhost.localdomain/34347] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-31 08:03:15,983 WARN [Listener at localhost.localdomain/34347] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-05-31 08:03:15,984 INFO [Listener at localhost.localdomain/34347] log.Slf4jLog(67): jetty-6.1.26 2023-05-31 08:03:15,987 INFO [Listener at localhost.localdomain/34347] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/java.io.tmpdir/Jetty_localhost_40119_datanode____.dyknlo/webapp 2023-05-31 08:03:16,057 INFO [Listener at localhost.localdomain/34347] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40119 2023-05-31 08:03:16,063 WARN [Listener at localhost.localdomain/44459] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-31 08:03:16,370 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:03:16,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x165ed5d106e0a761: Processing first storage report for DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3 from datanode 2ce06846-2052-4aff-a7c2-98d39493791c 2023-05-31 08:03:16,586 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x165ed5d106e0a761: from storage DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3 node DatanodeRegistration(127.0.0.1:42563, datanodeUuid=2ce06846-2052-4aff-a7c2-98d39493791c, infoPort=43319, infoSecurePort=0, ipcPort=34347, storageInfo=lv=-57;cid=testClusterID;nsid=1254829291;c=1685520195371), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:03:16,586 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x165ed5d106e0a761: Processing first storage report for 
DS-3a478de8-c917-4ef5-a4a0-77daaadcecaa from datanode 2ce06846-2052-4aff-a7c2-98d39493791c 2023-05-31 08:03:16,586 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x165ed5d106e0a761: from storage DS-3a478de8-c917-4ef5-a4a0-77daaadcecaa node DatanodeRegistration(127.0.0.1:42563, datanodeUuid=2ce06846-2052-4aff-a7c2-98d39493791c, infoPort=43319, infoSecurePort=0, ipcPort=34347, storageInfo=lv=-57;cid=testClusterID;nsid=1254829291;c=1685520195371), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:03:16,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x171d7b1550cfb846: Processing first storage report for DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c from datanode 9c83853d-c223-4574-9bb9-d5d375511149 2023-05-31 08:03:16,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x171d7b1550cfb846: from storage DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c node DatanodeRegistration(127.0.0.1:37773, datanodeUuid=9c83853d-c223-4574-9bb9-d5d375511149, infoPort=40917, infoSecurePort=0, ipcPort=44459, storageInfo=lv=-57;cid=testClusterID;nsid=1254829291;c=1685520195371), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:03:16,677 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x171d7b1550cfb846: Processing first storage report for DS-9e9259b8-da5e-4ed7-89dd-807b88fa5a66 from datanode 9c83853d-c223-4574-9bb9-d5d375511149 2023-05-31 08:03:16,677 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x171d7b1550cfb846: from storage DS-9e9259b8-da5e-4ed7-89dd-807b88fa5a66 node DatanodeRegistration(127.0.0.1:37773, datanodeUuid=9c83853d-c223-4574-9bb9-d5d375511149, infoPort=40917, infoSecurePort=0, ipcPort=44459, storageInfo=lv=-57;cid=testClusterID;nsid=1254829291;c=1685520195371), blocks: 0, hasStaleStorage: false, 
processing time: 0 msecs, invalidatedBlocks: 0 2023-05-31 08:03:16,775 DEBUG [Listener at localhost.localdomain/44459] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a 2023-05-31 08:03:16,795 INFO [Listener at localhost.localdomain/44459] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/zookeeper_0, clientPort=51908, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-31 08:03:16,796 INFO [Listener at localhost.localdomain/44459] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51908 2023-05-31 08:03:16,796 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,798 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,818 INFO [Listener at localhost.localdomain/44459] util.FSUtils(471): Created version file 
at hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d with version=8 2023-05-31 08:03:16,818 INFO [Listener at localhost.localdomain/44459] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43311/user/jenkins/test-data/a93941f9-ec46-90ec-27dc-4290e6df2338/hbase-staging 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:03:16,820 INFO [Listener at localhost.localdomain/44459] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-31 08:03:16,822 INFO [Listener at localhost.localdomain/44459] ipc.NettyRpcServer(120): Bind to /188.40.62.62:37819 2023-05-31 08:03:16,822 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,823 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,823 INFO [Listener at localhost.localdomain/44459] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37819 connecting to ZooKeeper ensemble=127.0.0.1:51908 2023-05-31 08:03:16,867 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:378190x0, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:03:16,869 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37819-0x100804362ff0000 connected 2023-05-31 08:03:16,932 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:03:16,934 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:03:16,935 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:03:16,936 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-31 08:03:16,936 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37819 2023-05-31 08:03:16,937 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37819 2023-05-31 08:03:16,937 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-31 08:03:16,938 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37819 2023-05-31 08:03:16,938 INFO [Listener at localhost.localdomain/44459] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d, hbase.cluster.distributed=false 2023-05-31 08:03:16,957 INFO [Listener at localhost.localdomain/44459] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated 
replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-31 08:03:16,958 INFO [Listener at localhost.localdomain/44459] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-31 08:03:16,959 INFO [Listener at localhost.localdomain/44459] ipc.NettyRpcServer(120): Bind to /188.40.62.62:41543 2023-05-31 08:03:16,960 INFO [Listener at localhost.localdomain/44459] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-31 08:03:16,963 DEBUG [Listener at localhost.localdomain/44459] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-31 08:03:16,963 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,964 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:16,964 INFO [Listener at localhost.localdomain/44459] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41543 connecting to ZooKeeper ensemble=127.0.0.1:51908 2023-05-31 08:03:16,973 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:415430x0, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-31 08:03:16,974 DEBUG 
[Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): regionserver:415430x0, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-31 08:03:16,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41543-0x100804362ff0001 connected 2023-05-31 08:03:16,975 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:03:16,975 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ZKUtil(164): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-31 08:03:16,975 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41543 2023-05-31 08:03:16,975 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41543 2023-05-31 08:03:16,976 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41543 2023-05-31 08:03:16,976 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41543 2023-05-31 08:03:16,976 DEBUG [Listener at localhost.localdomain/44459] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41543 2023-05-31 08:03:16,978 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:16,987 DEBUG [Listener at localhost.localdomain/44459-EventThread] 
zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:03:16,987 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:16,995 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:03:16,995 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:16,995 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-31 08:03:16,995 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:03:16,996 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,37819,1685520196819 from backup master directory 2023-05-31 08:03:16,996 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-31 08:03:17,006 DEBUG [Listener at 
localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:17,006 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:03:17,006 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-31 08:03:17,006 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:17,016 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/hbase.id with ID: 7b39a789-0ed4-4195-ab4d-8e3b3aea9d32 2023-05-31 08:03:17,027 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:17,036 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,043 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x75facd6e to 127.0.0.1:51908 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:03:17,054 
DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55a9af9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:03:17,054 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-31 08:03:17,055 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-31 08:03:17,055 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:03:17,057 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store-tmp 2023-05-31 08:03:17,065 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:17,065 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-31 08:03:17,065 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:17,065 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:17,065 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-31 08:03:17,065 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-31 08:03:17,065 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-31 08:03:17,066 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:03:17,073 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/WALs/jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:17,077 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C37819%2C1685520196819, suffix=, logDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/WALs/jenkins-hbase16.apache.org,37819,1685520196819, archiveDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/oldWALs, maxLogs=10 2023-05-31 08:03:17,082 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/WALs/jenkins-hbase16.apache.org,37819,1685520196819/jenkins-hbase16.apache.org%2C37819%2C1685520196819.1685520197077 2023-05-31 08:03:17,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42563,DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3,DISK], DatanodeInfoWithStorage[127.0.0.1:37773,DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c,DISK]] 2023-05-31 08:03:17,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:03:17,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:17,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,083 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,086 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,087 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-31 08:03:17,088 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-31 08:03:17,089 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,090 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,090 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,094 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-31 08:03:17,097 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:03:17,098 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=773028, jitterRate=-0.01704472303390503}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:03:17,098 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-31 08:03:17,098 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, 
flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-31 08:03:17,100 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-31 08:03:17,100 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-31 08:03:17,100 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-31 08:03:17,101 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-31 08:03:17,101 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-31 08:03:17,101 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-31 08:03:17,102 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-31 08:03:17,104 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-05-31 08:03:17,117 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-31 08:03:17,117 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-31 08:03:17,118 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-31 08:03:17,118 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-31 08:03:17,118 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-31 08:03:17,128 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,129 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-31 08:03:17,129 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-31 08:03:17,130 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-31 08:03:17,139 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:03:17,139 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-31 08:03:17,139 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,140 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,37819,1685520196819, sessionid=0x100804362ff0000, setting cluster-up flag (Was=false) 2023-05-31 08:03:17,156 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,181 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-31 08:03:17,183 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] 
procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:17,203 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,231 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-31 08:03:17,234 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:17,235 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.hbase-snapshot/.tmp 2023-05-31 08:03:17,243 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-31 08:03:17,243 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:03:17,244 DEBUG 
[master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:03:17,244 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,245 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685520227245 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-31 08:03:17,246 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,247 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-31 08:03:17,247 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:03:17,247 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-31 08:03:17,247 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-31 08:03:17,248 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520197248,5,FailOnTimeoutGroup] 2023-05-31 08:03:17,248 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520197248,5,FailOnTimeoutGroup] 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-31 08:03:17,248 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,249 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-31 08:03:17,249 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:03:17,261 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:03:17,262 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-31 08:03:17,262 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d 2023-05-31 08:03:17,273 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:17,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:03:17,275 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/info 2023-05-31 08:03:17,276 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:03:17,276 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,276 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:03:17,278 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(951): ClusterId : 7b39a789-0ed4-4195-ab4d-8e3b3aea9d32 2023-05-31 08:03:17,278 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-31 08:03:17,279 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:03:17,279 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 08:03:17,280 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,280 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:03:17,281 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/table 2023-05-31 08:03:17,281 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:03:17,281 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,282 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740 2023-05-31 08:03:17,282 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740 2023-05-31 08:03:17,284 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-31 08:03:17,285 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:03:17,287 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:03:17,287 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751128, jitterRate=-0.04489243030548096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:03:17,287 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:03:17,287 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:03:17,287 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:03:17,287 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 08:03:17,287 DEBUG 
[PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:03:17,287 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 08:03:17,288 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:03:17,288 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:03:17,288 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-31 08:03:17,288 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-31 08:03:17,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-31 08:03:17,290 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-31 08:03:17,291 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-31 08:03:17,291 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-31 08:03:17,291 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-31 08:03:17,298 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-31 08:03:17,299 
DEBUG [RS:0;jenkins-hbase16:41543] zookeeper.ReadOnlyZKClient(139): Connect 0x29c5e6e9 to 127.0.0.1:51908 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:03:17,312 DEBUG [RS:0;jenkins-hbase16:41543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27021da1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:03:17,312 DEBUG [RS:0;jenkins-hbase16:41543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@233d8d99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-05-31 08:03:17,320 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:41543 2023-05-31 08:03:17,320 INFO [RS:0;jenkins-hbase16:41543] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-31 08:03:17,320 INFO [RS:0;jenkins-hbase16:41543] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-31 08:03:17,320 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-31 08:03:17,321 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase16.apache.org,37819,1685520196819 with isa=jenkins-hbase16.apache.org/188.40.62.62:41543, startcode=1685520196957 2023-05-31 08:03:17,321 DEBUG [RS:0;jenkins-hbase16:41543] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-31 08:03:17,324 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:42345, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-31 08:03:17,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37819] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,326 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d 2023-05-31 08:03:17,326 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33357 2023-05-31 08:03:17,326 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-31 08:03:17,337 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:03:17,337 DEBUG [RS:0;jenkins-hbase16:41543] zookeeper.ZKUtil(162): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,338 WARN [RS:0;jenkins-hbase16:41543] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will 
not be cleared on crash by start scripts (Longer MTTR!) 2023-05-31 08:03:17,338 INFO [RS:0;jenkins-hbase16:41543] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:03:17,338 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,338 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,41543,1685520196957] 2023-05-31 08:03:17,343 DEBUG [RS:0;jenkins-hbase16:41543] zookeeper.ZKUtil(162): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,344 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-31 08:03:17,344 INFO [RS:0;jenkins-hbase16:41543] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-31 08:03:17,346 INFO [RS:0;jenkins-hbase16:41543] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-31 08:03:17,346 INFO [RS:0;jenkins-hbase16:41543] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-31 08:03:17,347 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 08:03:17,347 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-31 08:03:17,349 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,349 DEBUG [RS:0;jenkins-hbase16:41543] 
executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,350 DEBUG [RS:0;jenkins-hbase16:41543] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-05-31 08:03:17,351 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,351 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,351 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,366 INFO [RS:0;jenkins-hbase16:41543] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-31 08:03:17,366 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,41543,1685520196957-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-31 08:03:17,374 INFO [RS:0;jenkins-hbase16:41543] regionserver.Replication(203): jenkins-hbase16.apache.org,41543,1685520196957 started 2023-05-31 08:03:17,374 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,41543,1685520196957, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:41543, sessionid=0x100804362ff0001 2023-05-31 08:03:17,374 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-31 08:03:17,374 DEBUG [RS:0;jenkins-hbase16:41543] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,374 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,41543,1685520196957' 2023-05-31 08:03:17,374 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-31 08:03:17,374 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,41543,1685520196957' 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on 
node: '/hbase/online-snapshot/abort' 2023-05-31 08:03:17,375 DEBUG [RS:0;jenkins-hbase16:41543] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-31 08:03:17,376 DEBUG [RS:0;jenkins-hbase16:41543] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-31 08:03:17,376 INFO [RS:0;jenkins-hbase16:41543] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-31 08:03:17,376 INFO [RS:0;jenkins-hbase16:41543] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-31 08:03:17,441 DEBUG [jenkins-hbase16:37819] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-31 08:03:17,442 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,41543,1685520196957, state=OPENING 2023-05-31 08:03:17,448 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-31 08:03:17,456 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:17,457 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,41543,1685520196957}] 2023-05-31 08:03:17,457 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:03:17,479 INFO [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C41543%2C1685520196957, suffix=, 
logDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957, archiveDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs, maxLogs=32 2023-05-31 08:03:17,493 INFO [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957/jenkins-hbase16.apache.org%2C41543%2C1685520196957.1685520197480 2023-05-31 08:03:17,493 DEBUG [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42563,DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3,DISK], DatanodeInfoWithStorage[127.0.0.1:37773,DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c,DISK]] 2023-05-31 08:03:17,612 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,612 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-31 08:03:17,617 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:38662, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-31 08:03:17,622 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-31 08:03:17,622 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:03:17,625 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C41543%2C1685520196957.meta, suffix=.meta, 
logDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957, archiveDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs, maxLogs=32 2023-05-31 08:03:17,630 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957/jenkins-hbase16.apache.org%2C41543%2C1685520196957.meta.1685520197625.meta 2023-05-31 08:03:17,630 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42563,DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3,DISK], DatanodeInfoWithStorage[127.0.0.1:37773,DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c,DISK]] 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-31 08:03:17,631 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-31 08:03:17,631 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-31 08:03:17,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-31 08:03:17,633 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/info 2023-05-31 08:03:17,633 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/info 2023-05-31 08:03:17,633 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-31 08:03:17,634 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,634 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-31 08:03:17,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:03:17,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/rep_barrier 2023-05-31 08:03:17,635 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-31 08:03:17,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-31 08:03:17,636 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/table 2023-05-31 08:03:17,636 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/table 2023-05-31 08:03:17,636 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-31 08:03:17,636 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-31 08:03:17,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740 2023-05-31 08:03:17,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740 2023-05-31 08:03:17,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-31 08:03:17,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-31 08:03:17,641 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=858427, jitterRate=0.09154750406742096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-31 08:03:17,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-31 08:03:17,644 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685520197612 2023-05-31 08:03:17,648 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-31 08:03:17,648 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-31 08:03:17,649 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,41543,1685520196957, state=OPEN 2023-05-31 08:03:17,656 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-31 08:03:17,656 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-31 08:03:17,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-31 08:03:17,658 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,41543,1685520196957 in 199 msec 2023-05-31 08:03:17,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-31 08:03:17,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-05-31 08:03:17,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 422 msec 2023-05-31 08:03:17,661 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685520197661, completionTime=-1 2023-05-31 08:03:17,661 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-31 08:03:17,661 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-05-31 08:03:17,663 DEBUG [hconnection-0x42f8cc88-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:03:17,665 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:38674, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:03:17,666 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-31 08:03:17,666 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685520257666 2023-05-31 08:03:17,666 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685520317666 2023-05-31 08:03:17,666 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-05-31 08:03:17,687 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37819,1685520196819-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,687 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37819,1685520196819-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,687 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37819,1685520196819-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-05-31 08:03:17,687 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:37819, period=300000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-31 08:03:17,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-31 08:03:17,688 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-31 08:03:17,689 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-31 08:03:17,689 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-31 08:03:17,690 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-31 08:03:17,692 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-31 08:03:17,693 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.tmp/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:17,694 DEBUG 
[HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.tmp/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116 empty. 2023-05-31 08:03:17,695 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.tmp/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:17,695 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-31 08:03:17,704 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-31 08:03:17,705 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e7b898c433b8bc708d0d5ac028c43116, NAME => 'hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/.tmp 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e7b898c433b8bc708d0d5ac028c43116, disabling compactions & flushes 2023-05-31 
08:03:17,711 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. after waiting 0 ms 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:17,711 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:17,711 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e7b898c433b8bc708d0d5ac028c43116: 2023-05-31 08:03:17,713 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-31 08:03:17,714 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520197714"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685520197714"}]},"ts":"1685520197714"} 2023-05-31 08:03:17,716 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-05-31 08:03:17,717 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-31 08:03:17,717 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520197717"}]},"ts":"1685520197717"} 2023-05-31 08:03:17,718 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-31 08:03:17,756 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e7b898c433b8bc708d0d5ac028c43116, ASSIGN}] 2023-05-31 08:03:17,760 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e7b898c433b8bc708d0d5ac028c43116, ASSIGN 2023-05-31 08:03:17,761 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e7b898c433b8bc708d0d5ac028c43116, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,41543,1685520196957; forceNewPlan=false, retain=false 2023-05-31 08:03:17,913 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e7b898c433b8bc708d0d5ac028c43116, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:17,914 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520197913"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1685520197913"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685520197913"}]},"ts":"1685520197913"} 2023-05-31 08:03:17,918 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e7b898c433b8bc708d0d5ac028c43116, server=jenkins-hbase16.apache.org,41543,1685520196957}] 2023-05-31 08:03:18,084 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7b898c433b8bc708d0d5ac028c43116, NAME => 'hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.', STARTKEY => '', ENDKEY => ''} 2023-05-31 08:03:18,085 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,085 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-31 08:03:18,085 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,088 INFO 
[StoreOpener-e7b898c433b8bc708d0d5ac028c43116-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,090 DEBUG [StoreOpener-e7b898c433b8bc708d0d5ac028c43116-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/info 2023-05-31 08:03:18,090 DEBUG [StoreOpener-e7b898c433b8bc708d0d5ac028c43116-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/info 2023-05-31 08:03:18,090 INFO [StoreOpener-e7b898c433b8bc708d0d5ac028c43116-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7b898c433b8bc708d0d5ac028c43116 columnFamilyName info 2023-05-31 08:03:18,090 INFO [StoreOpener-e7b898c433b8bc708d0d5ac028c43116-1] regionserver.HStore(310): Store=e7b898c433b8bc708d0d5ac028c43116/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-05-31 08:03:18,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,091 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,094 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-31 08:03:18,095 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened e7b898c433b8bc708d0d5ac028c43116; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=811122, jitterRate=0.0313958078622818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-31 08:03:18,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for e7b898c433b8bc708d0d5ac028c43116: 2023-05-31 08:03:18,097 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116., pid=6, masterSystemTime=1685520198074 2023-05-31 08:03:18,099 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,099 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,099 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e7b898c433b8bc708d0d5ac028c43116, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:18,100 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685520198099"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1685520198099"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685520198099"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685520198099"}]},"ts":"1685520198099"} 2023-05-31 08:03:18,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-31 08:03:18,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e7b898c433b8bc708d0d5ac028c43116, server=jenkins-hbase16.apache.org,41543,1685520196957 in 183 msec 2023-05-31 08:03:18,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-31 08:03:18,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e7b898c433b8bc708d0d5ac028c43116, ASSIGN in 349 msec 2023-05-31 08:03:18,106 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-31 08:03:18,106 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685520198106"}]},"ts":"1685520198106"} 2023-05-31 08:03:18,108 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-31 08:03:18,115 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-31 08:03:18,118 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 428 msec 2023-05-31 08:03:18,191 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-31 08:03:18,198 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:03:18,198 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:18,207 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-31 08:03:18,223 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, 
quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:03:18,236 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 28 msec 2023-05-31 08:03:18,241 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-31 08:03:18,261 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-31 08:03:18,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 35 msec 2023-05-31 08:03:18,303 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-31 08:03:18,323 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-31 08:03:18,323 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.317sec 2023-05-31 08:03:18,323 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-31 08:03:18,323 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-05-31 08:03:18,323 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-31 08:03:18,323 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37819,1685520196819-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-31 08:03:18,324 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37819,1685520196819-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-05-31 08:03:18,328 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-31 08:03:18,379 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ReadOnlyZKClient(139): Connect 0x5195e69d to 127.0.0.1:51908 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-31 08:03:18,395 DEBUG [Listener at localhost.localdomain/44459] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55fa09fd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-31 08:03:18,397 DEBUG [hconnection-0x5717f337-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-31 08:03:18,398 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:43060, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-31 08:03:18,400 INFO [Listener at localhost.localdomain/44459] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:18,400 INFO [Listener at localhost.localdomain/44459] fs.HFileSystem(337): Added intercepting 
call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-31 08:03:18,414 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-31 08:03:18,414 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:18,415 INFO [Listener at localhost.localdomain/44459] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-31 08:03:18,416 INFO [Listener at localhost.localdomain/44459] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-31 08:03:18,418 INFO [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs, maxLogs=32 2023-05-31 08:03:18,424 INFO [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1/test.com%2C8080%2C1.1685520198418 2023-05-31 08:03:18,425 DEBUG [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42563,DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3,DISK], DatanodeInfoWithStorage[127.0.0.1:37773,DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c,DISK]] 2023-05-31 08:03:18,435 INFO [Listener at 
localhost.localdomain/44459] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1/test.com%2C8080%2C1.1685520198418 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1/test.com%2C8080%2C1.1685520198425 2023-05-31 08:03:18,435 DEBUG [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42563,DS-4adf29c8-da52-4e95-90e3-4f8c6236f8b3,DISK], DatanodeInfoWithStorage[127.0.0.1:37773,DS-09ce0a32-d143-4ecc-9a8d-ace22f90024c,DISK]] 2023-05-31 08:03:18,435 DEBUG [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1/test.com%2C8080%2C1.1685520198418 is not closed yet, will try archiving it next time 2023-05-31 08:03:18,436 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1 2023-05-31 08:03:18,447 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/test.com,8080,1/test.com%2C8080%2C1.1685520198418 to hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs/test.com%2C8080%2C1.1685520198418 2023-05-31 08:03:18,448 DEBUG [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs 2023-05-31 08:03:18,449 INFO [Listener at localhost.localdomain/44459] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685520198425) 2023-05-31 08:03:18,449 INFO [Listener at localhost.localdomain/44459] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-31 08:03:18,449 DEBUG [Listener at 
localhost.localdomain/44459] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5195e69d to 127.0.0.1:51908 2023-05-31 08:03:18,449 DEBUG [Listener at localhost.localdomain/44459] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:18,450 DEBUG [Listener at localhost.localdomain/44459] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-31 08:03:18,450 DEBUG [Listener at localhost.localdomain/44459] util.JVMClusterUtil(257): Found active master hash=639453562, stopped=false 2023-05-31 08:03:18,450 INFO [Listener at localhost.localdomain/44459] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,37819,1685520196819 2023-05-31 08:03:18,465 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 08:03:18,465 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-31 08:03:18,465 INFO [Listener at localhost.localdomain/44459] procedure2.ProcedureExecutor(629): Stopping 2023-05-31 08:03:18,465 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-31 08:03:18,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:03:18,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/running 2023-05-31 08:03:18,466 DEBUG [Listener at localhost.localdomain/44459] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75facd6e to 127.0.0.1:51908 2023-05-31 08:03:18,466 DEBUG [Listener at localhost.localdomain/44459] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:18,466 INFO [Listener at localhost.localdomain/44459] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,41543,1685520196957' ***** 2023-05-31 08:03:18,466 INFO [Listener at localhost.localdomain/44459] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-31 08:03:18,467 INFO [RS:0;jenkins-hbase16:41543] regionserver.HeapMemoryManager(220): Stopping 2023-05-31 08:03:18,467 INFO [RS:0;jenkins-hbase16:41543] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-31 08:03:18,467 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-31 08:03:18,467 INFO [RS:0;jenkins-hbase16:41543] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-31 08:03:18,467 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(3303): Received CLOSE for e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,467 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:18,468 DEBUG [RS:0;jenkins-hbase16:41543] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29c5e6e9 to 127.0.0.1:51908 2023-05-31 08:03:18,468 DEBUG [RS:0;jenkins-hbase16:41543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:18,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing e7b898c433b8bc708d0d5ac028c43116, disabling compactions & flushes 2023-05-31 08:03:18,468 INFO [RS:0;jenkins-hbase16:41543] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-31 08:03:18,468 INFO [RS:0;jenkins-hbase16:41543] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-31 08:03:18,468 INFO [RS:0;jenkins-hbase16:41543] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-31 08:03:18,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,468 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-31 08:03:18,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 
after waiting 0 ms 2023-05-31 08:03:18,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,468 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-31 08:03:18,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing e7b898c433b8bc708d0d5ac028c43116 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-31 08:03:18,469 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1478): Online Regions={e7b898c433b8bc708d0d5ac028c43116=hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116., 1588230740=hbase:meta,,1.1588230740} 2023-05-31 08:03:18,469 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-31 08:03:18,469 DEBUG [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1504): Waiting on 1588230740, e7b898c433b8bc708d0d5ac028c43116 2023-05-31 08:03:18,469 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-31 08:03:18,469 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-31 08:03:18,469 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-31 08:03:18,469 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-31 08:03:18,469 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-31 08:03:18,481 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/.tmp/info/9f763039157344608d509ab8409b6473 2023-05-31 08:03:18,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/.tmp/info/58096002c127466fa0c2c0069b1acd1d 2023-05-31 08:03:18,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/.tmp/info/58096002c127466fa0c2c0069b1acd1d as hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/info/58096002c127466fa0c2c0069b1acd1d 2023-05-31 08:03:18,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/info/58096002c127466fa0c2c0069b1acd1d, entries=2, sequenceid=6, filesize=4.8 K 2023-05-31 08:03:18,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for e7b898c433b8bc708d0d5ac028c43116 in 27ms, sequenceid=6, compaction requested=false 2023-05-31 08:03:18,498 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 
(bloomFilter=false), to=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/.tmp/table/0e503aead6c54d25acad8efe9c598e78 2023-05-31 08:03:18,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/namespace/e7b898c433b8bc708d0d5ac028c43116/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-31 08:03:18,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 2023-05-31 08:03:18,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for e7b898c433b8bc708d0d5ac028c43116: 2023-05-31 08:03:18,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685520197688.e7b898c433b8bc708d0d5ac028c43116. 
2023-05-31 08:03:18,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/.tmp/info/9f763039157344608d509ab8409b6473 as hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/info/9f763039157344608d509ab8409b6473 2023-05-31 08:03:18,506 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/info/9f763039157344608d509ab8409b6473, entries=10, sequenceid=9, filesize=5.9 K 2023-05-31 08:03:18,507 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/.tmp/table/0e503aead6c54d25acad8efe9c598e78 as hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/table/0e503aead6c54d25acad8efe9c598e78 2023-05-31 08:03:18,511 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/table/0e503aead6c54d25acad8efe9c598e78, entries=2, sequenceid=9, filesize=4.7 K 2023-05-31 08:03:18,511 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 42ms, sequenceid=9, compaction requested=false 2023-05-31 08:03:18,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-31 08:03:18,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-31 08:03:18,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-31 08:03:18,517 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-31 08:03:18,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-31 08:03:18,669 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,41543,1685520196957; all regions closed. 2023-05-31 08:03:18,670 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:18,682 DEBUG [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs 2023-05-31 08:03:18,682 INFO [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase16.apache.org%2C41543%2C1685520196957.meta:.meta(num 1685520197625) 2023-05-31 08:03:18,682 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/WALs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:18,687 DEBUG [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/oldWALs 2023-05-31 08:03:18,687 INFO [RS:0;jenkins-hbase16:41543] wal.AbstractFSWAL(1031): Closed 
WAL: FSHLog jenkins-hbase16.apache.org%2C41543%2C1685520196957:(num 1685520197480) 2023-05-31 08:03:18,687 DEBUG [RS:0;jenkins-hbase16:41543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-31 08:03:18,687 INFO [RS:0;jenkins-hbase16:41543] regionserver.LeaseManager(133): Closed leases 2023-05-31 08:03:18,688 INFO [RS:0;jenkins-hbase16:41543] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-31 08:03:18,688 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-31 08:03:18,689 INFO [RS:0;jenkins-hbase16:41543] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:41543 2023-05-31 08:03:18,698 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:03:18,698 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,41543,1685520196957 2023-05-31 08:03:18,698 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-31 08:03:18,699 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,41543,1685520196957] 2023-05-31 08:03:18,699 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing 
jenkins-hbase16.apache.org,41543,1685520196957; numProcessing=1
2023-05-31 08:03:18,714 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,41543,1685520196957 already deleted, retry=false
2023-05-31 08:03:18,715 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,41543,1685520196957 expired; onlineServers=0
2023-05-31 08:03:18,715 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase16.apache.org,37819,1685520196819' *****
2023-05-31 08:03:18,715 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0
2023-05-31 08:03:18,716 DEBUG [M:0;jenkins-hbase16:37819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2222d961, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase16.apache.org/188.40.62.62:0
2023-05-31 08:03:18,716 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,37819,1685520196819
2023-05-31 08:03:18,716 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,37819,1685520196819; all regions closed.
2023-05-31 08:03:18,716 DEBUG [M:0;jenkins-hbase16:37819] ipc.AbstractRpcClient(494): Stopping rpc client
2023-05-31 08:03:18,716 DEBUG [M:0;jenkins-hbase16:37819] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-05-31 08:03:18,716 DEBUG [M:0;jenkins-hbase16:37819] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-31 08:03:18,716 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520197248] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1685520197248,5,FailOnTimeoutGroup]
2023-05-31 08:03:18,717 INFO [M:0;jenkins-hbase16:37819] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-31 08:03:18,716 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520197248] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1685520197248,5,FailOnTimeoutGroup]
2023-05-31 08:03:18,716 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-05-31 08:03:18,718 INFO [M:0;jenkins-hbase16:37819] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-31 08:03:18,719 INFO [M:0;jenkins-hbase16:37819] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown
2023-05-31 08:03:18,719 DEBUG [M:0;jenkins-hbase16:37819] master.HMaster(1512): Stopping service threads
2023-05-31 08:03:18,719 INFO [M:0;jenkins-hbase16:37819] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-31 08:03:18,719 ERROR [M:0;jenkins-hbase16:37819] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-31 08:03:18,721 INFO [M:0;jenkins-hbase16:37819] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-31 08:03:18,721 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-31 08:03:18,728 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-31 08:03:18,728 DEBUG [M:0;jenkins-hbase16:37819] zookeeper.ZKUtil(398): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-31 08:03:18,728 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-31 08:03:18,728 WARN [M:0;jenkins-hbase16:37819] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-31 08:03:18,729 INFO [M:0;jenkins-hbase16:37819] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-31 08:03:18,729 INFO [M:0;jenkins-hbase16:37819] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-31 08:03:18,730 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-31 08:03:18,730 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-31 08:03:18,731 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 08:03:18,731 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 08:03:18,731 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-31 08:03:18,731 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 08:03:18,731 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB
2023-05-31 08:03:18,742 INFO [M:0;jenkins-hbase16:37819] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d27d340189064749a33e27881c005002
2023-05-31 08:03:18,748 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d27d340189064749a33e27881c005002 as hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d27d340189064749a33e27881c005002
2023-05-31 08:03:18,752 INFO [M:0;jenkins-hbase16:37819] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33357/user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d27d340189064749a33e27881c005002, entries=8, sequenceid=66, filesize=6.3 K
2023-05-31 08:03:18,753 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=66, compaction requested=false
2023-05-31 08:03:18,754 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-31 08:03:18,754 DEBUG [M:0;jenkins-hbase16:37819] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-31 08:03:18,754 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/3c155ba4-4353-2405-9381-4de1ce8e115d/MasterData/WALs/jenkins-hbase16.apache.org,37819,1685520196819
2023-05-31 08:03:18,758 INFO [M:0;jenkins-hbase16:37819] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-31 08:03:18,758 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-31 08:03:18,758 INFO [M:0;jenkins-hbase16:37819] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:37819
2023-05-31 08:03:18,764 DEBUG [M:0;jenkins-hbase16:37819] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,37819,1685520196819 already deleted, retry=false
2023-05-31 08:03:18,858 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 08:03:18,858 INFO [RS:0;jenkins-hbase16:41543] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,41543,1685520196957; zookeeper connection closed.
2023-05-31 08:03:18,858 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): regionserver:41543-0x100804362ff0001, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 08:03:18,859 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2306b549] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2306b549
2023-05-31 08:03:18,860 INFO [Listener at localhost.localdomain/44459] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-31 08:03:18,958 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 08:03:18,958 INFO [M:0;jenkins-hbase16:37819] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,37819,1685520196819; zookeeper connection closed.
2023-05-31 08:03:18,958 DEBUG [Listener at localhost.localdomain/44459-EventThread] zookeeper.ZKWatcher(600): master:37819-0x100804362ff0000, quorum=127.0.0.1:51908, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-31 08:03:18,959 WARN [Listener at localhost.localdomain/44459] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:03:18,964 INFO [Listener at localhost.localdomain/44459] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:03:19,072 WARN [BP-1692730248-188.40.62.62-1685520195371 heartbeating to localhost.localdomain/127.0.0.1:33357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:03:19,072 WARN [BP-1692730248-188.40.62.62-1685520195371 heartbeating to localhost.localdomain/127.0.0.1:33357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1692730248-188.40.62.62-1685520195371 (Datanode Uuid 9c83853d-c223-4574-9bb9-d5d375511149) service to localhost.localdomain/127.0.0.1:33357
2023-05-31 08:03:19,073 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/dfs/data/data3/current/BP-1692730248-188.40.62.62-1685520195371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:03:19,074 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/dfs/data/data4/current/BP-1692730248-188.40.62.62-1685520195371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:03:19,076 WARN [Listener at localhost.localdomain/44459] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-31 08:03:19,080 INFO [Listener at localhost.localdomain/44459] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-31 08:03:19,184 WARN [BP-1692730248-188.40.62.62-1685520195371 heartbeating to localhost.localdomain/127.0.0.1:33357] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-31 08:03:19,184 WARN [BP-1692730248-188.40.62.62-1685520195371 heartbeating to localhost.localdomain/127.0.0.1:33357] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1692730248-188.40.62.62-1685520195371 (Datanode Uuid 2ce06846-2052-4aff-a7c2-98d39493791c) service to localhost.localdomain/127.0.0.1:33357
2023-05-31 08:03:19,186 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/dfs/data/data1/current/BP-1692730248-188.40.62.62-1685520195371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:03:19,187 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/89ee261a-f97d-1293-3172-84c4e6b0ce2a/cluster_235746cd-f806-9db5-a311-16de167a1e4c/dfs/data/data2/current/BP-1692730248-188.40.62.62-1685520195371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-31 08:03:19,202 INFO [Listener at localhost.localdomain/44459] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-05-31 08:03:19,315 INFO [Listener at localhost.localdomain/44459] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-31 08:03:19,325 INFO [Listener at localhost.localdomain/44459] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-31 08:03:19,335 INFO [Listener at localhost.localdomain/44459] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=130 (was 108) - Thread LEAK? -, OpenFileDescriptor=559 (was 532) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=37 (was 31) - SystemLoadAverage LEAK? -, ProcessCount=166 (was 164) - ProcessCount LEAK? -, AvailableMemoryMB=8857 (was 8867)