2023-05-29 09:55:29,333 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923 2023-05-29 09:55:29,347 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-29 09:55:29,379 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=258, ProcessCount=172, AvailableMemoryMB=4556 2023-05-29 09:55:29,386 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 09:55:29,386 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde, deleteOnExit=true 2023-05-29 09:55:29,386 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 09:55:29,387 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/test.cache.data in system properties and HBase conf 2023-05-29 09:55:29,387 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 09:55:29,388 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/hadoop.log.dir in system properties and HBase conf 2023-05-29 09:55:29,388 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 09:55:29,389 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 09:55:29,389 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 09:55:29,505 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-29 09:55:29,911 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-29 09:55:29,914 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:55:29,915 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:55:29,915 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 09:55:29,915 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:55:29,916 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 09:55:29,916 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 09:55:29,916 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:55:29,917 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:55:29,917 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 09:55:29,918 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/nfs.dump.dir in system properties and HBase conf 2023-05-29 09:55:29,918 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/java.io.tmpdir in system properties and HBase conf 2023-05-29 09:55:29,918 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:55:29,919 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 09:55:29,919 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 09:55:30,389 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 09:55:30,403 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:55:30,407 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:55:30,680 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-29 09:55:30,848 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-29 09:55:30,862 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:55:30,898 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:55:30,931 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/java.io.tmpdir/Jetty_localhost_44177_hdfs____4h2jnf/webapp 2023-05-29 09:55:31,074 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44177 2023-05-29 09:55:31,082 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
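
For orientation: the minicluster whose startup is logged above (StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1, ...}) is the kind of cluster a test brings up with HBaseTestingUtility. The sketch below is a minimal, hypothetical JUnit 4 class assuming the HBase 2.x test API; it is not the actual TestLogRolling source, and the class name is made up for illustration.

    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;

    public class MiniClusterSketch {
      // HBaseClassTestRule enforces the per-class timeout reported at the top of the log.
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterSketch.class);

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Mirrors the logged options: 1 master, 1 region server, 2 data nodes, 1 ZK server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Tears down HBase, the mini DFS, and the mini ZooKeeper started above.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
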
2023-05-29 09:55:31,092 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:55:31,093 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:55:31,595 WARN [Listener at localhost/37765] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:55:31,657 WARN [Listener at localhost/37765] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:55:31,676 WARN [Listener at localhost/37765] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:55:31,683 INFO [Listener at localhost/37765] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:55:31,687 INFO [Listener at localhost/37765] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/java.io.tmpdir/Jetty_localhost_40147_datanode____.tc6zs1/webapp 2023-05-29 09:55:31,800 INFO [Listener at localhost/37765] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40147 2023-05-29 09:55:32,082 WARN [Listener at localhost/36997] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:55:32,093 WARN [Listener at localhost/36997] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:55:32,096 WARN [Listener at localhost/36997] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:55:32,097 INFO [Listener at localhost/36997] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:55:32,102 INFO [Listener at localhost/36997] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/java.io.tmpdir/Jetty_localhost_38013_datanode____.sob2rw/webapp 2023-05-29 09:55:32,198 INFO [Listener at localhost/36997] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38013 2023-05-29 09:55:32,208 WARN [Listener at localhost/35675] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:55:32,502 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc70db48829685f43: Processing first storage report for DS-b1dfcd79-244c-476b-a878-49ff1c2604a1 from datanode bfed62e7-27a5-4131-b1e6-915482b52f56 2023-05-29 09:55:32,504 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc70db48829685f43: from storage DS-b1dfcd79-244c-476b-a878-49ff1c2604a1 node DatanodeRegistration(127.0.0.1:42027, datanodeUuid=bfed62e7-27a5-4131-b1e6-915482b52f56, infoPort=35661, infoSecurePort=0, ipcPort=36997, storageInfo=lv=-57;cid=testClusterID;nsid=1832923967;c=1685354130477), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-05-29 09:55:32,504 INFO [Block report processor] 
blockmanagement.BlockManager(2202): BLOCK* processReport 0x290cf9f9eb3b3d8c: Processing first storage report for DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc from datanode d54c6923-995b-485a-b933-13994816110c 2023-05-29 09:55:32,504 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x290cf9f9eb3b3d8c: from storage DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc node DatanodeRegistration(127.0.0.1:43683, datanodeUuid=d54c6923-995b-485a-b933-13994816110c, infoPort=44575, infoSecurePort=0, ipcPort=35675, storageInfo=lv=-57;cid=testClusterID;nsid=1832923967;c=1685354130477), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:55:32,504 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc70db48829685f43: Processing first storage report for DS-2da2c16f-7796-4784-ace6-1db7f520c5aa from datanode bfed62e7-27a5-4131-b1e6-915482b52f56 2023-05-29 09:55:32,505 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc70db48829685f43: from storage DS-2da2c16f-7796-4784-ace6-1db7f520c5aa node DatanodeRegistration(127.0.0.1:42027, datanodeUuid=bfed62e7-27a5-4131-b1e6-915482b52f56, infoPort=35661, infoSecurePort=0, ipcPort=36997, storageInfo=lv=-57;cid=testClusterID;nsid=1832923967;c=1685354130477), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 09:55:32,505 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x290cf9f9eb3b3d8c: Processing first storage report for DS-b85acd71-cf11-44ea-b4c1-28da730f40a2 from datanode d54c6923-995b-485a-b933-13994816110c 2023-05-29 09:55:32,505 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x290cf9f9eb3b3d8c: from storage DS-b85acd71-cf11-44ea-b4c1-28da730f40a2 node DatanodeRegistration(127.0.0.1:43683, datanodeUuid=d54c6923-995b-485a-b933-13994816110c, infoPort=44575, infoSecurePort=0, ipcPort=35675, storageInfo=lv=-57;cid=testClusterID;nsid=1832923967;c=1685354130477), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:55:32,572 DEBUG [Listener at localhost/35675] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923 2023-05-29 09:55:32,635 INFO [Listener at localhost/35675] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/zookeeper_0, clientPort=64229, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 09:55:32,649 INFO [Listener at localhost/35675] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64229 2023-05-29 09:55:32,659 INFO [Listener at localhost/35675] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:32,662 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:33,331 INFO [Listener at localhost/35675] util.FSUtils(471): Created version file at hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1 with version=8 2023-05-29 09:55:33,331 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 09:55:33,641 INFO [Listener at localhost/35675] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-29 09:55:34,128 INFO [Listener at localhost/35675] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:55:34,159 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,160 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,160 INFO [Listener at localhost/35675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:55:34,160 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,160 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:55:34,307 INFO [Listener at localhost/35675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:55:34,388 DEBUG [Listener at localhost/35675] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-29 09:55:34,488 INFO [Listener at localhost/35675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33941 2023-05-29 09:55:34,498 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:34,501 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:34,521 INFO [Listener at localhost/35675] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33941 connecting to ZooKeeper ensemble=127.0.0.1:64229 2023-05-29 09:55:34,559 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): 
master:339410x0, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:55:34,561 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33941-0x100765db78d0000 connected 2023-05-29 09:55:34,586 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:55:34,586 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:55:34,590 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:55:34,597 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33941 2023-05-29 09:55:34,598 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33941 2023-05-29 09:55:34,598 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33941 2023-05-29 09:55:34,599 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33941 2023-05-29 09:55:34,599 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33941 2023-05-29 09:55:34,605 INFO [Listener at localhost/35675] master.HMaster(444): hbase.rootdir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1, hbase.cluster.distributed=false 2023-05-29 09:55:34,674 INFO [Listener at localhost/35675] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:55:34,674 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,675 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,675 INFO [Listener at localhost/35675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:55:34,675 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:55:34,675 INFO [Listener at localhost/35675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:55:34,680 INFO [Listener at localhost/35675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:55:34,683 INFO [Listener at localhost/35675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34007 2023-05-29 
09:55:34,685 INFO [Listener at localhost/35675] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:55:34,692 DEBUG [Listener at localhost/35675] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 09:55:34,693 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:34,695 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:34,696 INFO [Listener at localhost/35675] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34007 connecting to ZooKeeper ensemble=127.0.0.1:64229 2023-05-29 09:55:34,700 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:340070x0, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:55:34,701 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34007-0x100765db78d0001 connected 2023-05-29 09:55:34,701 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:55:34,702 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:55:34,703 DEBUG [Listener at localhost/35675] zookeeper.ZKUtil(164): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:55:34,704 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34007 2023-05-29 09:55:34,704 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34007 2023-05-29 09:55:34,704 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34007 2023-05-29 09:55:34,705 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34007 2023-05-29 09:55:34,705 DEBUG [Listener at localhost/35675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34007 2023-05-29 09:55:34,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:34,716 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:55:34,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on existing 
znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:34,736 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:55:34,736 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:55:34,736 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:34,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:55:34,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33941,1685354133478 from backup master directory 2023-05-29 09:55:34,738 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:55:34,741 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:34,741 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:55:34,741 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
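
The repeated "Set watcher on znode that does not yet exist, /hbase/master" and "Received ZooKeeper Event, type=None, state=SyncConnected" entries above come from watches being registered against znodes before they are created. A minimal sketch of that pattern follows, using the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil; the ensemble address and znode path are taken from the log, while the class name and printed output are illustrative only.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the mini ZK ensemble from the log (client port 64229, 90s session timeout).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:64229", 90_000, (WatchedEvent event) -> {
          // The first callback is the SyncConnected event with a null path, as in the log.
          System.out.println("type=" + event.getType() + ", state=" + event.getState()
              + ", path=" + event.getPath());
        });

        // exists() registers a watch even when the znode is absent, which is what
        // "Set watcher on znode that does not yet exist, /hbase/master" refers to.
        if (zk.exists("/hbase/master", true) == null) {
          System.out.println("/hbase/master not created yet; watch registered");
        }
      }
    }
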
2023-05-29 09:55:34,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:34,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-29 09:55:34,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-29 09:55:34,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase.id with ID: 405e719c-1f9f-4429-ac45-5a00b424f7b4 2023-05-29 09:55:34,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:34,885 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:34,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4ca54718 to 127.0.0.1:64229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:55:34,963 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@749d8f61, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:55:34,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:55:34,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 09:55:35,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:55:35,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store-tmp 2023-05-29 09:55:35,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:35,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:55:35,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:55:35,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:55:35,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:55:35,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:55:35,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:55:35,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:55:35,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/WALs/jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:35,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33941%2C1685354133478, suffix=, logDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/WALs/jenkins-hbase4.apache.org,33941,1685354133478, archiveDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/oldWALs, maxLogs=10 2023-05-29 09:55:35,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200) at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:55:35,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/WALs/jenkins-hbase4.apache.org,33941,1685354133478/jenkins-hbase4.apache.org%2C33941%2C1685354133478.1685354135109 2023-05-29 09:55:35,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:55:35,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:55:35,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:35,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,195 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,202 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 09:55:35,231 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 09:55:35,243 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:35,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,266 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:55:35,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:55:35,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852880, jitterRate=0.08449383080005646}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:55:35,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:55:35,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 09:55:35,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 09:55:35,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-29 09:55:35,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
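
The WARN with the NoSuchMethodException stack trace a few entries earlier ("Could not find replicate method on builder; will not set replicate when creating output stream") is a capability probe: an optional HDFS builder method is looked up reflectively and, when the running Hadoop version (2.10 here) lacks it, the WAL writer proceeds without it. Below is a minimal sketch of that probe-and-fall-back pattern; the wrapper class is hypothetical and this is not the actual CommonFSUtils code, though the builder class and method names are the ones from the stack trace.

    import java.lang.reflect.Method;

    public final class OptionalMethodProbe {
      // Cached handle; stays null when the method is missing, as on the Hadoop 2.10 in this log.
      private static final Method REPLICATE_METHOD = lookupReplicate();

      private static Method lookupReplicate() {
        try {
          Class<?> builderClass = Class.forName(
              "org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder");
          return builderClass.getMethod("replicate");
        } catch (ClassNotFoundException | NoSuchMethodException e) {
          // Same outcome as the logged WARN: note the absence once and carry on without it.
          System.err.println("replicate() not available; continuing without it: " + e);
          return null;
        }
      }

      static boolean replicateSupported() {
        return REPLICATE_METHOD != null;
      }
    }
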
2023-05-29 09:55:35,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-29 09:55:35,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 36 msec 2023-05-29 09:55:35,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 09:55:35,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 09:55:35,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 09:55:35,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 09:55:35,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-29 09:55:35,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 09:55:35,431 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 09:55:35,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 09:55:35,439 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:35,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 09:55:35,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 09:55:35,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 09:55:35,457 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:55:35,458 DEBUG [Listener at localhost/35675-EventThread] 
zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:55:35,458 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:35,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33941,1685354133478, sessionid=0x100765db78d0000, setting cluster-up flag (Was=false) 2023-05-29 09:55:35,473 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:35,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 09:55:35,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:35,491 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:35,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 09:55:35,497 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:35,500 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.hbase-snapshot/.tmp 2023-05-29 09:55:35,509 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(951): ClusterId : 405e719c-1f9f-4429-ac45-5a00b424f7b4 2023-05-29 09:55:35,513 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:55:35,518 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:55:35,518 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:55:35,521 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:55:35,522 DEBUG [RS:0;jenkins-hbase4:34007] zookeeper.ReadOnlyZKClient(139): Connect 0x1b2afcce to 127.0.0.1:64229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:55:35,526 DEBUG [RS:0;jenkins-hbase4:34007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15ed01db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 
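
The StochasticLoadBalancer line above ("Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, ...") is driven by configuration. A small hedged sketch of setting those knobs follows; the hbase.master.balancer.stochastic.* keys are the standard balancer settings, stated from general knowledge of the balancer rather than from this log, and the wrapper class is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class BalancerTuningSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Values mirror the ones reported in the "Loaded config" log line.
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
        return conf;
      }
    }
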
2023-05-29 09:55:35,527 DEBUG [RS:0;jenkins-hbase4:34007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11d297be, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:55:35,562 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34007 2023-05-29 09:55:35,567 INFO [RS:0;jenkins-hbase4:34007] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:55:35,567 INFO [RS:0;jenkins-hbase4:34007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:55:35,567 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 09:55:35,570 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,33941,1685354133478 with isa=jenkins-hbase4.apache.org/172.31.14.131:34007, startcode=1685354134673 2023-05-29 09:55:35,593 DEBUG [RS:0;jenkins-hbase4:34007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:55:35,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:55:35,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354165649 2023-05-29 09:55:35,651 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 09:55:35,655 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:55:35,655 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 09:55:35,661 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:55:35,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 09:55:35,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 09:55:35,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 09:55:35,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 09:55:35,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 09:55:35,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
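
The hbase:meta descriptor dumped above (an 'info' family with VERSIONS => '3', IN_MEMORY => 'true', BLOCKSIZE => '8192', BLOOMFILTER => 'NONE', and so on) has the shape produced by the HBase 2.x descriptor builders. Below is a hedged sketch that builds a similar family and table descriptor with the public client API; the 'demo:meta_like' table name and class name are illustrative, and hbase:meta itself is written by the master through FSTableDescriptors, as the log shows, not by client code like this.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class MetaLikeDescriptorSketch {
      static TableDescriptor build() {
        // Column family mirroring the logged 'info' family of hbase:meta.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(3)
            .setInMemory(true)
            .setBlocksize(8192)
            .setBloomFilterType(BloomType.NONE)
            .build();

        // Illustrative table name; only the descriptor shape is the point here.
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo", "meta_like"))
            .setColumnFamily(info)
            .build();
      }
    }
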
2023-05-29 09:55:35,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 09:55:35,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 09:55:35,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 09:55:35,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 09:55:35,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 09:55:35,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354135686,5,FailOnTimeoutGroup] 2023-05-29 09:55:35,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354135686,5,FailOnTimeoutGroup] 2023-05-29 09:55:35,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 09:55:35,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
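
The cleaner chores initialized above (TimeToLiveLogCleaner and ReplicationLogCleaner for old WALs; HFileLinkCleaner, SnapshotHFileCleaner and TimeToLiveHFileCleaner for HFiles) are pluggable. A hedged sketch of wiring such plugin lists follows; hbase.master.logcleaner.plugins and hbase.master.hfilecleaner.plugins are standard HBase keys, the plugin lists below are assembled from the classes named in the log rather than claimed to be the complete branch-2.4 defaults, and the wrapper class is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CleanerPluginConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // WAL cleaners run by the LogsCleaner chore; comma-separated, applied in order.
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
                + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
        // HFile cleaners run by the HFileCleaner chore.
        conf.set("hbase.master.hfilecleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner,"
                + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
        return conf;
      }
    }
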
2023-05-29 09:55:35,705 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:55:35,707 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:55:35,707 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1 2023-05-29 09:55:35,729 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:35,732 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45943, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:55:35,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:55:35,736 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/info 2023-05-29 09:55:35,737 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:55:35,739 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:35,739 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:55:35,743 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:55:35,744 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:55:35,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:35,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:55:35,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,748 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/table 2023-05-29 09:55:35,749 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:55:35,750 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:35,752 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered 
edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740 2023-05-29 09:55:35,753 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740 2023-05-29 09:55:35,757 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:55:35,759 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:55:35,763 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:55:35,764 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721310, jitterRate=-0.0828070342540741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:55:35,764 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:55:35,764 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:55:35,764 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:55:35,764 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:55:35,764 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:55:35,764 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:55:35,765 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:55:35,765 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:55:35,766 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1 2023-05-29 09:55:35,766 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37765 2023-05-29 09:55:35,766 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:55:35,771 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:55:35,771 INFO [PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 09:55:35,773 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:55:35,773 DEBUG [RS:0;jenkins-hbase4:34007] zookeeper.ZKUtil(162): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,774 WARN [RS:0;jenkins-hbase4:34007] 
hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 09:55:35,774 INFO [RS:0;jenkins-hbase4:34007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:55:35,775 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,775 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34007,1685354134673] 2023-05-29 09:55:35,783 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 09:55:35,785 DEBUG [RS:0;jenkins-hbase4:34007] zookeeper.ZKUtil(162): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,795 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 09:55:35,796 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:55:35,797 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 09:55:35,806 INFO [RS:0;jenkins-hbase4:34007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:55:35,825 INFO [RS:0;jenkins-hbase4:34007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:55:35,829 INFO [RS:0;jenkins-hbase4:34007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:55:35,830 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,831 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:55:35,839 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
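[editorial sketch, not part of the captured log] The region server above instantiates a WALProvider of type FSHLogProvider. Assuming the usual provider-selection mechanism, that choice is driven by the "hbase.wal.provider" configuration key; a minimal sketch of pinning the FSHLog-based provider in a test configuration might look like the following (the comment about roll size is an assumption drawn from the blocksize/rollsize values logged further down).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // "filesystem" selects the classic FSHLog-based provider (FSHLogProvider),
    // matching the provider named in the log above.
    conf.set("hbase.wal.provider", "filesystem");
    // Assumption: the 128 MB roll size logged later appears to be the 256 MB
    // WAL block size scaled by hbase.regionserver.logroll.multiplier.
    return conf;
  }
}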
2023-05-29 09:55:35,839 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,839 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,840 DEBUG [RS:0;jenkins-hbase4:34007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:55:35,841 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,841 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,841 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:35,858 INFO [RS:0;jenkins-hbase4:34007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:55:35,860 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34007,1685354134673-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 09:55:35,876 INFO [RS:0;jenkins-hbase4:34007] regionserver.Replication(203): jenkins-hbase4.apache.org,34007,1685354134673 started 2023-05-29 09:55:35,876 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34007,1685354134673, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34007, sessionid=0x100765db78d0001 2023-05-29 09:55:35,876 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:55:35,876 DEBUG [RS:0;jenkins-hbase4:34007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,876 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34007,1685354134673' 2023-05-29 09:55:35,876 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34007,1685354134673' 2023-05-29 09:55:35,877 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:55:35,878 DEBUG [RS:0;jenkins-hbase4:34007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:55:35,878 DEBUG [RS:0;jenkins-hbase4:34007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:55:35,878 INFO [RS:0;jenkins-hbase4:34007] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:55:35,879 INFO [RS:0;jenkins-hbase4:34007] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 09:55:35,949 DEBUG [jenkins-hbase4:33941] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 09:55:35,952 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34007,1685354134673, state=OPENING 2023-05-29 09:55:35,960 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 09:55:35,962 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:35,962 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:55:35,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34007,1685354134673}] 2023-05-29 09:55:35,989 INFO [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34007%2C1685354134673, suffix=, logDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673, archiveDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/oldWALs, maxLogs=32 2023-05-29 09:55:36,003 INFO [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354135992 2023-05-29 09:55:36,003 DEBUG [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:55:36,148 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:36,151 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 09:55:36,154 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56602, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 09:55:36,166 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 09:55:36,167 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:55:36,170 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34007%2C1685354134673.meta, suffix=.meta, logDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673, archiveDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/oldWALs, maxLogs=32 2023-05-29 09:55:36,184 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.meta.1685354136172.meta 2023-05-29 09:55:36,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:55:36,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:55:36,187 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 09:55:36,202 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 09:55:36,207 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 09:55:36,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 09:55:36,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:36,213 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 09:55:36,213 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 09:55:36,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:55:36,217 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/info 2023-05-29 09:55:36,218 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/info 2023-05-29 09:55:36,218 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:55:36,219 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:36,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:55:36,221 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:55:36,221 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:55:36,222 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:55:36,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:36,223 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:55:36,224 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/table 2023-05-29 09:55:36,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/table 2023-05-29 09:55:36,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:55:36,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:36,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740 2023-05-29 09:55:36,231 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740 2023-05-29 09:55:36,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:55:36,237 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:55:36,238 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=873205, jitterRate=0.11033818125724792}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:55:36,239 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:55:36,249 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354136140 2023-05-29 09:55:36,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 09:55:36,267 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 09:55:36,267 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34007,1685354134673, state=OPEN 2023-05-29 09:55:36,270 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 09:55:36,270 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:55:36,275 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 09:55:36,275 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34007,1685354134673 in 305 msec 2023-05-29 09:55:36,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 09:55:36,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 492 msec 2023-05-29 09:55:36,285 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 729 msec 2023-05-29 09:55:36,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354136286, completionTime=-1 2023-05-29 09:55:36,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 09:55:36,286 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 09:55:36,348 DEBUG [hconnection-0x473b29be-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:55:36,350 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56616, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:55:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 09:55:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354196366 2023-05-29 09:55:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354256366 2023-05-29 09:55:36,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 80 msec 2023-05-29 09:55:36,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33941,1685354133478-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:36,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33941,1685354133478-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:36,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33941,1685354133478-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33941, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:36,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 09:55:36,398 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 09:55:36,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 09:55:36,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:55:36,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 09:55:36,418 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:55:36,420 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:55:36,441 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,443 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa empty. 2023-05-29 09:55:36,444 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,444 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 09:55:36,496 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 09:55:36,498 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => cab950e5e0b5ae2d049c37bd8eaa14aa, NAME => 'hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp 2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing cab950e5e0b5ae2d049c37bd8eaa14aa, disabling compactions & flushes 2023-05-29 09:55:36,514 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 
2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. after waiting 0 ms 2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:55:36,514 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:55:36,514 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for cab950e5e0b5ae2d049c37bd8eaa14aa: 2023-05-29 09:55:36,518 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:55:36,533 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354136521"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354136521"}]},"ts":"1685354136521"} 2023-05-29 09:55:36,558 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:55:36,560 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:55:36,564 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354136560"}]},"ts":"1685354136560"} 2023-05-29 09:55:36,567 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 09:55:36,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cab950e5e0b5ae2d049c37bd8eaa14aa, ASSIGN}] 2023-05-29 09:55:36,579 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=cab950e5e0b5ae2d049c37bd8eaa14aa, ASSIGN 2023-05-29 09:55:36,581 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=cab950e5e0b5ae2d049c37bd8eaa14aa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34007,1685354134673; forceNewPlan=false, retain=false 2023-05-29 09:55:36,732 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cab950e5e0b5ae2d049c37bd8eaa14aa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:36,732 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354136732"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354136732"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354136732"}]},"ts":"1685354136732"} 2023-05-29 09:55:36,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure cab950e5e0b5ae2d049c37bd8eaa14aa, server=jenkins-hbase4.apache.org,34007,1685354134673}] 2023-05-29 09:55:36,897 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:55:36,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cab950e5e0b5ae2d049c37bd8eaa14aa, NAME => 'hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:55:36,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:36,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,903 INFO [StoreOpener-cab950e5e0b5ae2d049c37bd8eaa14aa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,905 DEBUG [StoreOpener-cab950e5e0b5ae2d049c37bd8eaa14aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/info 2023-05-29 09:55:36,905 DEBUG [StoreOpener-cab950e5e0b5ae2d049c37bd8eaa14aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/info 2023-05-29 09:55:36,905 INFO [StoreOpener-cab950e5e0b5ae2d049c37bd8eaa14aa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cab950e5e0b5ae2d049c37bd8eaa14aa columnFamilyName info 2023-05-29 09:55:36,906 INFO [StoreOpener-cab950e5e0b5ae2d049c37bd8eaa14aa-1] regionserver.HStore(310): Store=cab950e5e0b5ae2d049c37bd8eaa14aa/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:36,908 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,908 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:55:36,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:55:36,916 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cab950e5e0b5ae2d049c37bd8eaa14aa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=787961, jitterRate=0.001945197582244873}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:55:36,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cab950e5e0b5ae2d049c37bd8eaa14aa: 2023-05-29 09:55:36,918 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa., pid=6, masterSystemTime=1685354136890 2023-05-29 09:55:36,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:55:36,922 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 
2023-05-29 09:55:36,924 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=cab950e5e0b5ae2d049c37bd8eaa14aa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:36,924 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354136923"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354136923"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354136923"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354136923"}]},"ts":"1685354136923"} 2023-05-29 09:55:36,933 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 09:55:36,933 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure cab950e5e0b5ae2d049c37bd8eaa14aa, server=jenkins-hbase4.apache.org,34007,1685354134673 in 192 msec 2023-05-29 09:55:36,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 09:55:36,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=cab950e5e0b5ae2d049c37bd8eaa14aa, ASSIGN in 357 msec 2023-05-29 09:55:36,938 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:55:36,939 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354136938"}]},"ts":"1685354136938"} 2023-05-29 09:55:36,941 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 09:55:36,946 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:55:36,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 540 msec 2023-05-29 09:55:37,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 09:55:37,020 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:55:37,020 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:37,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 09:55:37,072 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): 
master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:55:37,077 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 30 msec 2023-05-29 09:55:37,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 09:55:37,101 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:55:37,105 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-05-29 09:55:37,114 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 09:55:37,117 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 09:55:37,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.375sec 2023-05-29 09:55:37,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 09:55:37,121 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 09:55:37,121 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 09:55:37,122 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33941,1685354133478-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 09:55:37,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33941,1685354133478-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 09:55:37,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 09:55:37,215 DEBUG [Listener at localhost/35675] zookeeper.ReadOnlyZKClient(139): Connect 0x485a27cf to 127.0.0.1:64229 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:55:37,219 DEBUG [Listener at localhost/35675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bd15f86, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:55:37,232 DEBUG [hconnection-0x8fce993-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:55:37,242 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:55:37,252 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:55:37,253 INFO [Listener at localhost/35675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:55:37,261 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 09:55:37,261 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:55:37,262 INFO [Listener at localhost/35675] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 09:55:37,270 DEBUG [Listener at localhost/35675] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 09:55:37,274 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55224, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 09:55:37,283 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 09:55:37,283 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
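[editorial sketch, not part of the captured log] The entries above show the test client connecting to the mini-cluster and issuing "set balanceSwitch=false" before creating its table. A hedged sketch of an equivalent client call with the HBase 2.x Admin API follows; the class name and the stand-alone main method are assumptions for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the "set balanceSwitch=false" request in the log: turn the
      // balancer off synchronously so regions stay put during the test.
      boolean previouslyOn = admin.balancerSwitch(false, true);
      System.out.println("balancer was previously " + (previouslyOn ? "on" : "off"));
    }
  }
}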
2023-05-29 09:55:37,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:55:37,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-29 09:55:37,291 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:55:37,293 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:55:37,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-29 09:55:37,297 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,298 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49 empty. 
2023-05-29 09:55:37,300 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,300 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-29 09:55:37,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:55:37,322 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 09:55:37,324 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => d76919a0c3f0be6bd773fa40fbddef49, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/.tmp 2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing d76919a0c3f0be6bd773fa40fbddef49, disabling compactions & flushes 2023-05-29 09:55:37,338 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. after waiting 0 ms 2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:37,338 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 
2023-05-29 09:55:37,338 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:55:37,342 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:55:37,344 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685354137344"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354137344"}]},"ts":"1685354137344"} 2023-05-29 09:55:37,347 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:55:37,349 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:55:37,349 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354137349"}]},"ts":"1685354137349"} 2023-05-29 09:55:37,351 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-29 09:55:37,355 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=d76919a0c3f0be6bd773fa40fbddef49, ASSIGN}] 2023-05-29 09:55:37,357 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=d76919a0c3f0be6bd773fa40fbddef49, ASSIGN 2023-05-29 09:55:37,359 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=d76919a0c3f0be6bd773fa40fbddef49, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34007,1685354134673; forceNewPlan=false, retain=false 2023-05-29 09:55:37,511 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d76919a0c3f0be6bd773fa40fbddef49, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:37,511 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685354137510"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354137510"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354137510"}]},"ts":"1685354137510"} 2023-05-29 09:55:37,515 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure d76919a0c3f0be6bd773fa40fbddef49, server=jenkins-hbase4.apache.org,34007,1685354134673}] 2023-05-29 09:55:37,675 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:37,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d76919a0c3f0be6bd773fa40fbddef49, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:55:37,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:55:37,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,677 INFO [StoreOpener-d76919a0c3f0be6bd773fa40fbddef49-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,680 DEBUG [StoreOpener-d76919a0c3f0be6bd773fa40fbddef49-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info 2023-05-29 09:55:37,680 DEBUG [StoreOpener-d76919a0c3f0be6bd773fa40fbddef49-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info 2023-05-29 09:55:37,680 INFO [StoreOpener-d76919a0c3f0be6bd773fa40fbddef49-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d76919a0c3f0be6bd773fa40fbddef49 columnFamilyName info 2023-05-29 09:55:37,681 INFO [StoreOpener-d76919a0c3f0be6bd773fa40fbddef49-1] regionserver.HStore(310): Store=d76919a0c3f0be6bd773fa40fbddef49/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:55:37,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:37,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:55:37,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d76919a0c3f0be6bd773fa40fbddef49; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=852013, jitterRate=0.08339075744152069}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:55:37,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:55:37,693 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49., pid=11, masterSystemTime=1685354137668 2023-05-29 09:55:37,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:37,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 
2023-05-29 09:55:37,697 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d76919a0c3f0be6bd773fa40fbddef49, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:55:37,697 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1685354137697"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354137697"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354137697"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354137697"}]},"ts":"1685354137697"} 2023-05-29 09:55:37,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 09:55:37,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure d76919a0c3f0be6bd773fa40fbddef49, server=jenkins-hbase4.apache.org,34007,1685354134673 in 186 msec 2023-05-29 09:55:37,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 09:55:37,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=d76919a0c3f0be6bd773fa40fbddef49, ASSIGN in 349 msec 2023-05-29 09:55:37,709 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:55:37,710 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354137709"}]},"ts":"1685354137709"} 2023-05-29 09:55:37,712 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-29 09:55:37,715 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:55:37,717 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 428 msec 2023-05-29 09:55:41,723 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-29 09:55:41,803 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 09:55:41,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 09:55:41,805 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-29 09:55:43,637 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 09:55:43,638 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-29 09:55:47,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33941] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:55:47,314 INFO [Listener at localhost/35675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-29 09:55:47,318 DEBUG [Listener at localhost/35675] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-29 09:55:47,319 DEBUG [Listener at localhost/35675] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:55:59,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34007] regionserver.HRegion(9158): Flush requested on d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:55:59,347 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d76919a0c3f0be6bd773fa40fbddef49 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:55:59,416 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/35d6de048d834629baa6465760a95b01 2023-05-29 09:55:59,465 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/35d6de048d834629baa6465760a95b01 as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01 2023-05-29 09:55:59,478 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01, entries=7, sequenceid=11, filesize=12.1 K 2023-05-29 09:55:59,480 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for d76919a0c3f0be6bd773fa40fbddef49 in 133ms, sequenceid=11, compaction requested=false 2023-05-29 09:55:59,482 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:07,559 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:09,763 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:11,966 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:14,170 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:14,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34007] regionserver.HRegion(9158): Flush requested on d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:56:14,170 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d76919a0c3f0be6bd773fa40fbddef49 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:56:14,372 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:14,391 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/aadf194e6c694d3daaca467cd376d365 2023-05-29 09:56:14,402 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/aadf194e6c694d3daaca467cd376d365 as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365 2023-05-29 09:56:14,409 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365, entries=7, sequenceid=21, filesize=12.1 K 2023-05-29 09:56:14,611 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:14,612 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for d76919a0c3f0be6bd773fa40fbddef49 in 441ms, sequenceid=21, compaction requested=false 2023-05-29 09:56:14,612 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:14,612 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-29 09:56:14,612 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:56:14,613 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01 
because midkey is the same as first or last row 2023-05-29 09:56:16,373 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:18,576 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:18,577 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34007%2C1685354134673:(num 1685354135992) roll requested 2023-05-29 09:56:18,577 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:18,789 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:18,791 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354135992 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354178577 2023-05-29 09:56:18,792 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:18,792 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354135992 is not closed yet, will try archiving it next time 2023-05-29 09:56:28,590 INFO [Listener at localhost/35675] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 09:56:33,593 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:33,593 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:33,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34007] regionserver.HRegion(9158): Flush requested on d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:56:33,593 DEBUG 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C34007%2C1685354134673:(num 1685354178577) roll requested 2023-05-29 09:56:33,593 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d76919a0c3f0be6bd773fa40fbddef49 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:56:35,594 INFO [Listener at localhost/35675] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 09:56:38,595 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:38,595 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:38,607 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:38,608 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:38,609 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354178577 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354193593 2023-05-29 09:56:38,609 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42027,DS-b1dfcd79-244c-476b-a878-49ff1c2604a1,DISK], DatanodeInfoWithStorage[127.0.0.1:43683,DS-8e0540da-874c-4c27-8324-5e7d6a7e27cc,DISK]] 2023-05-29 09:56:38,609 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673/jenkins-hbase4.apache.org%2C34007%2C1685354134673.1685354178577 is not closed yet, will try archiving it next time 2023-05-29 09:56:38,618 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/94043681239a4590bb9b6aefbc4135f5 2023-05-29 09:56:38,630 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/94043681239a4590bb9b6aefbc4135f5 as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5 2023-05-29 09:56:38,639 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5, entries=7, sequenceid=31, filesize=12.1 K 2023-05-29 09:56:38,643 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for d76919a0c3f0be6bd773fa40fbddef49 in 5050ms, sequenceid=31, compaction requested=true 2023-05-29 09:56:38,643 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:38,643 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-29 09:56:38,643 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:56:38,644 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01 because midkey is the same as first or last row 2023-05-29 09:56:38,646 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 09:56:38,646 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 09:56:38,651 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 09:56:38,653 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HStore(1912): d76919a0c3f0be6bd773fa40fbddef49/info is initiating minor compaction (all files) 2023-05-29 09:56:38,654 INFO [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of d76919a0c3f0be6bd773fa40fbddef49/info in TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 
2023-05-29 09:56:38,654 INFO [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5] into tmpdir=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp, totalSize=36.3 K 2023-05-29 09:56:38,656 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] compactions.Compactor(207): Compacting 35d6de048d834629baa6465760a95b01, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685354147324 2023-05-29 09:56:38,656 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] compactions.Compactor(207): Compacting aadf194e6c694d3daaca467cd376d365, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1685354161348 2023-05-29 09:56:38,657 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] compactions.Compactor(207): Compacting 94043681239a4590bb9b6aefbc4135f5, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1685354176171 2023-05-29 09:56:38,689 INFO [RS:0;jenkins-hbase4:34007-shortCompactions-0] throttle.PressureAwareThroughputController(145): d76919a0c3f0be6bd773fa40fbddef49#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 09:56:38,715 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/e3edf8b9db914adb9ee1267182101428 as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/e3edf8b9db914adb9ee1267182101428 2023-05-29 09:56:38,734 INFO [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in d76919a0c3f0be6bd773fa40fbddef49/info of d76919a0c3f0be6bd773fa40fbddef49 into e3edf8b9db914adb9ee1267182101428(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 09:56:38,735 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:38,735 INFO [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49., storeName=d76919a0c3f0be6bd773fa40fbddef49/info, priority=13, startTime=1685354198646; duration=0sec 2023-05-29 09:56:38,736 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-29 09:56:38,736 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:56:38,737 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/e3edf8b9db914adb9ee1267182101428 because midkey is the same as first or last row 2023-05-29 09:56:38,737 DEBUG [RS:0;jenkins-hbase4:34007-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 09:56:50,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34007] regionserver.HRegion(9158): Flush requested on d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:56:50,720 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing d76919a0c3f0be6bd773fa40fbddef49 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:56:50,737 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/fe01e54a335845e686cf4de20b94abbc 2023-05-29 09:56:50,746 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/fe01e54a335845e686cf4de20b94abbc as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/fe01e54a335845e686cf4de20b94abbc 2023-05-29 09:56:50,753 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/fe01e54a335845e686cf4de20b94abbc, entries=7, sequenceid=42, filesize=12.1 K 2023-05-29 09:56:50,754 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for d76919a0c3f0be6bd773fa40fbddef49 in 34ms, sequenceid=42, compaction requested=false 2023-05-29 09:56:50,755 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:50,755 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=39.1 K, sizeToCheck=16.0 K 2023-05-29 
09:56:50,755 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:56:50,755 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/e3edf8b9db914adb9ee1267182101428 because midkey is the same as first or last row 2023-05-29 09:56:58,729 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 09:56:58,729 INFO [Listener at localhost/35675] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 09:56:58,730 DEBUG [Listener at localhost/35675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x485a27cf to 127.0.0.1:64229 2023-05-29 09:56:58,730 DEBUG [Listener at localhost/35675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:56:58,730 DEBUG [Listener at localhost/35675] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 09:56:58,731 DEBUG [Listener at localhost/35675] util.JVMClusterUtil(257): Found active master hash=356485323, stopped=false 2023-05-29 09:56:58,731 INFO [Listener at localhost/35675] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:56:58,733 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:56:58,733 INFO [Listener at localhost/35675] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 09:56:58,733 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:56:58,733 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:56:58,734 DEBUG [Listener at localhost/35675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ca54718 to 127.0.0.1:64229 2023-05-29 09:56:58,734 DEBUG [Listener at localhost/35675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:56:58,735 INFO [Listener at localhost/35675] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,34007,1685354134673' ***** 2023-05-29 09:56:58,735 INFO [Listener at localhost/35675] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 09:56:58,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:56:58,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:56:58,735 INFO [RS:0;jenkins-hbase4:34007] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 09:56:58,735 INFO [RS:0;jenkins-hbase4:34007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-05-29 09:56:58,735 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 09:56:58,735 INFO [RS:0;jenkins-hbase4:34007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 09:56:58,736 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(3303): Received CLOSE for cab950e5e0b5ae2d049c37bd8eaa14aa 2023-05-29 09:56:58,736 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(3303): Received CLOSE for d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:56:58,737 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:56:58,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cab950e5e0b5ae2d049c37bd8eaa14aa, disabling compactions & flushes 2023-05-29 09:56:58,737 DEBUG [RS:0;jenkins-hbase4:34007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b2afcce to 127.0.0.1:64229 2023-05-29 09:56:58,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:56:58,737 DEBUG [RS:0;jenkins-hbase4:34007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:56:58,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:56:58,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. after waiting 0 ms 2023-05-29 09:56:58,738 INFO [RS:0;jenkins-hbase4:34007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 09:56:58,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:56:58,738 INFO [RS:0;jenkins-hbase4:34007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 09:56:58,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cab950e5e0b5ae2d049c37bd8eaa14aa 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 09:56:58,738 INFO [RS:0;jenkins-hbase4:34007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-29 09:56:58,738 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:56:58,738 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 09:56:58,738 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, cab950e5e0b5ae2d049c37bd8eaa14aa=hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa., d76919a0c3f0be6bd773fa40fbddef49=TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.} 2023-05-29 09:56:58,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:56:58,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:56:58,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:56:58,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:56:58,739 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:56:58,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-29 09:56:58,740 DEBUG [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1504): Waiting on 1588230740, cab950e5e0b5ae2d049c37bd8eaa14aa, d76919a0c3f0be6bd773fa40fbddef49 2023-05-29 09:56:58,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/.tmp/info/3e88c9e87ee14382b9503d852f7689df 2023-05-29 09:56:58,766 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/.tmp/info/c55855c0358c457394120f8a954b423a 2023-05-29 09:56:58,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/.tmp/info/3e88c9e87ee14382b9503d852f7689df as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/info/3e88c9e87ee14382b9503d852f7689df 2023-05-29 09:56:58,782 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/info/3e88c9e87ee14382b9503d852f7689df, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 09:56:58,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for cab950e5e0b5ae2d049c37bd8eaa14aa in 46ms, sequenceid=6, 
compaction requested=false 2023-05-29 09:56:58,788 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/.tmp/table/5707307c5096461e94324bcfc057604e 2023-05-29 09:56:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/namespace/cab950e5e0b5ae2d049c37bd8eaa14aa/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 09:56:58,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cab950e5e0b5ae2d049c37bd8eaa14aa: 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685354136405.cab950e5e0b5ae2d049c37bd8eaa14aa. 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d76919a0c3f0be6bd773fa40fbddef49, disabling compactions & flushes 2023-05-29 09:56:58,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. after waiting 0 ms 2023-05-29 09:56:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 
2023-05-29 09:56:58,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d76919a0c3f0be6bd773fa40fbddef49 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-29 09:56:58,797 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/.tmp/info/c55855c0358c457394120f8a954b423a as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/info/c55855c0358c457394120f8a954b423a 2023-05-29 09:56:58,807 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/info/c55855c0358c457394120f8a954b423a, entries=20, sequenceid=14, filesize=7.4 K 2023-05-29 09:56:58,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/.tmp/table/5707307c5096461e94324bcfc057604e as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/table/5707307c5096461e94324bcfc057604e 2023-05-29 09:56:58,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/c8337854b7d24efb86678919b633215b 2023-05-29 09:56:58,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/.tmp/info/c8337854b7d24efb86678919b633215b as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/c8337854b7d24efb86678919b633215b 2023-05-29 09:56:58,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/table/5707307c5096461e94324bcfc057604e, entries=4, sequenceid=14, filesize=4.8 K 2023-05-29 09:56:58,819 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2934, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 80ms, sequenceid=14, compaction requested=false 2023-05-29 09:56:58,828 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-29 09:56:58,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/c8337854b7d24efb86678919b633215b, entries=3, sequenceid=48, filesize=7.9 K 2023-05-29 09:56:58,830 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 09:56:58,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:56:58,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for d76919a0c3f0be6bd773fa40fbddef49 in 37ms, sequenceid=48, compaction requested=true 2023-05-29 09:56:58,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:56:58,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 09:56:58,834 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5] to archive 2023-05-29 09:56:58,836 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 09:56:58,842 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01 to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/archive/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/35d6de048d834629baa6465760a95b01 2023-05-29 09:56:58,844 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365 to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/archive/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/aadf194e6c694d3daaca467cd376d365 2023-05-29 09:56:58,845 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5 to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/archive/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/info/94043681239a4590bb9b6aefbc4135f5 2023-05-29 09:56:58,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/data/default/TestLogRolling-testSlowSyncLogRolling/d76919a0c3f0be6bd773fa40fbddef49/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-29 09:56:58,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:56:58,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d76919a0c3f0be6bd773fa40fbddef49: 2023-05-29 09:56:58,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1685354137283.d76919a0c3f0be6bd773fa40fbddef49. 2023-05-29 09:56:58,940 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 09:56:58,941 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34007,1685354134673; all regions closed. 
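[Editor's note] The HFileArchiver entries above show that compacted store files are not deleted at region close but moved under the cluster's archive directory, mirroring the data layout (archive/data/<namespace>/<table>/<region>/<family>). A minimal sketch of checking that with the plain Hadoop FileSystem API; the rootdir and table name are copied from the log, everything else is illustrative.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveCheckSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Archive layout mirrors the data layout seen in the log:
        // <rootdir>/archive/data/default/TestLogRolling-testSlowSyncLogRolling/<region>/info/<hfile>
        Path archive = new Path("hdfs://localhost:37765/user/jenkins/test-data/"
            + "2f36375f-b911-f85e-2999-e3ebf83a94f1/archive/data/default/"
            + "TestLogRolling-testSlowSyncLogRolling");
        try (FileSystem fs = FileSystem.get(URI.create(archive.toString()), conf)) {
          if (fs.exists(archive)) {
            for (FileStatus region : fs.listStatus(archive)) {
              for (FileStatus family : fs.listStatus(region.getPath())) {
                for (FileStatus hfile : fs.listStatus(family.getPath())) {
                  System.out.println("archived: " + hfile.getPath());
                }
              }
            }
          }
        }
      }
    }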
2023-05-29 09:56:58,941 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 09:56:58,942 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:56:58,950 DEBUG [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/oldWALs 2023-05-29 09:56:58,950 INFO [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34007%2C1685354134673.meta:.meta(num 1685354136172) 2023-05-29 09:56:58,950 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/WALs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:56:58,960 DEBUG [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/oldWALs 2023-05-29 09:56:58,960 INFO [RS:0;jenkins-hbase4:34007] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C34007%2C1685354134673:(num 1685354193593) 2023-05-29 09:56:58,960 DEBUG [RS:0;jenkins-hbase4:34007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:56:58,960 INFO [RS:0;jenkins-hbase4:34007] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:56:58,960 INFO [RS:0;jenkins-hbase4:34007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 09:56:58,960 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
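[Editor's note] The "Moved N WAL file(s) to .../oldWALs" and "Closed WAL: FSHLog ..." entries above are the shutdown path of log rolling: each FSHLog is closed and its files are retired. The same roll can be forced on a live cluster through the public Admin API, which is the kind of operation a TestLogRolling-style test exercises. A minimal sketch assuming an already-open Connection; the helper name is made up for illustration.

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class WalRollSketch {
      // Roll the write-ahead log of every live region server.
      static void rollAllWals(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          for (ServerName server : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
            admin.rollWALWriter(server); // closes the current WAL and starts a new one
          }
        }
      }
    }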
2023-05-29 09:56:58,961 INFO [RS:0;jenkins-hbase4:34007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34007 2023-05-29 09:56:58,967 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34007,1685354134673 2023-05-29 09:56:58,967 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:56:58,968 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:56:58,969 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34007,1685354134673] 2023-05-29 09:56:58,969 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34007,1685354134673; numProcessing=1 2023-05-29 09:56:58,972 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34007,1685354134673 already deleted, retry=false 2023-05-29 09:56:58,972 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34007,1685354134673 expired; onlineServers=0 2023-05-29 09:56:58,972 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33941,1685354133478' ***** 2023-05-29 09:56:58,972 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 09:56:58,972 DEBUG [M:0;jenkins-hbase4:33941] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@540282f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:56:58,972 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:56:58,972 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33941,1685354133478; all regions closed. 2023-05-29 09:56:58,972 DEBUG [M:0;jenkins-hbase4:33941] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:56:58,973 DEBUG [M:0;jenkins-hbase4:33941] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 09:56:58,973 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
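[Editor's note] The ZKWatcher entries above show the shutdown from ZooKeeper's side: the region server's ephemeral znode under /hbase/rs disappears, and the master's RegionServerTracker receives the NodeDeleted/NodeChildrenChanged events and expires the server. A minimal sketch of observing the same event with the plain ZooKeeper client, using the quorum address and znode path from the log; the session timeout and sleep are illustrative.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        String znode = "/hbase/rs/jenkins-hbase4.apache.org,34007,1685354134673";
        ZooKeeper zk = new ZooKeeper("127.0.0.1:64229", 30000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeDeleted) {
              // This is the event the master reacts to by expiring the region server.
              System.out.println("ephemeral node deleted: " + event.getPath());
            }
          }
        });
        zk.exists(znode, true); // registers the default watcher on the znode
        Thread.sleep(60000);    // keep the process alive long enough to observe the event
        zk.close();
      }
    }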
2023-05-29 09:56:58,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354135686] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354135686,5,FailOnTimeoutGroup] 2023-05-29 09:56:58,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354135686] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354135686,5,FailOnTimeoutGroup] 2023-05-29 09:56:58,973 DEBUG [M:0;jenkins-hbase4:33941] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 09:56:58,975 INFO [M:0;jenkins-hbase4:33941] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 09:56:58,975 INFO [M:0;jenkins-hbase4:33941] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 09:56:58,975 INFO [M:0;jenkins-hbase4:33941] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 09:56:58,975 DEBUG [M:0;jenkins-hbase4:33941] master.HMaster(1512): Stopping service threads 2023-05-29 09:56:58,975 INFO [M:0;jenkins-hbase4:33941] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 09:56:58,976 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 09:56:58,976 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:56:58,976 INFO [M:0;jenkins-hbase4:33941] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 09:56:58,976 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-29 09:56:58,976 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:56:58,977 DEBUG [M:0;jenkins-hbase4:33941] zookeeper.ZKUtil(398): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 09:56:58,977 WARN [M:0;jenkins-hbase4:33941] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 09:56:58,977 INFO [M:0;jenkins-hbase4:33941] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 09:56:58,977 INFO [M:0;jenkins-hbase4:33941] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 09:56:58,977 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:56:58,977 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
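[Editor's note] MemstoreFlusherChore, CompactionChecker, CompactedHFilesCleaner and the cleaner threads named above are periodic tasks run by a ChoreService; shutdown cancels them, which is why they log "was stopped" or "Exit Thread[...]". A rough sketch of that scheduling model, assuming the ScheduledChore(String, Stoppable, int) constructor available in this HBase line; the chore name, body and period are made up for illustration.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws Exception {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("demo");
        // Runs chore() every 1000 ms until the stopper is stopped or the service shuts down.
        service.scheduleChore(new ScheduledChore("DemoChecker", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("periodic check, in the spirit of CompactionChecker");
          }
        });
        Thread.sleep(5000);
        service.shutdown(); // reported at teardown as "Chore service for: ... on shutdown"
      }
    }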
2023-05-29 09:56:58,977 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:56:58,978 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:56:58,978 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:56:58,978 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.28 KB heapSize=46.71 KB 2023-05-29 09:56:58,994 INFO [M:0;jenkins-hbase4:33941] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.28 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/11d77423538d4af7be5a3adf9c04231a 2023-05-29 09:56:59,001 INFO [M:0;jenkins-hbase4:33941] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 11d77423538d4af7be5a3adf9c04231a 2023-05-29 09:56:59,002 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/11d77423538d4af7be5a3adf9c04231a as hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/11d77423538d4af7be5a3adf9c04231a 2023-05-29 09:56:59,009 INFO [M:0;jenkins-hbase4:33941] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 11d77423538d4af7be5a3adf9c04231a 2023-05-29 09:56:59,010 INFO [M:0;jenkins-hbase4:33941] regionserver.HStore(1080): Added hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/11d77423538d4af7be5a3adf9c04231a, entries=11, sequenceid=100, filesize=6.1 K 2023-05-29 09:56:59,011 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegion(2948): Finished flush of dataSize ~38.28 KB/39196, heapSize ~46.70 KB/47816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=100, compaction requested=false 2023-05-29 09:56:59,012 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:56:59,013 DEBUG [M:0;jenkins-hbase4:33941] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:56:59,013 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/MasterData/WALs/jenkins-hbase4.apache.org,33941,1685354133478 2023-05-29 09:56:59,017 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 09:56:59,017 INFO [M:0;jenkins-hbase4:33941] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-05-29 09:56:59,018 INFO [M:0;jenkins-hbase4:33941] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33941 2023-05-29 09:56:59,023 DEBUG [M:0;jenkins-hbase4:33941] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33941,1685354133478 already deleted, retry=false 2023-05-29 09:56:59,069 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:56:59,069 INFO [RS:0;jenkins-hbase4:34007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34007,1685354134673; zookeeper connection closed. 2023-05-29 09:56:59,069 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): regionserver:34007-0x100765db78d0001, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:56:59,070 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@176d1be6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@176d1be6 2023-05-29 09:56:59,070 INFO [Listener at localhost/35675] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 09:56:59,169 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:56:59,169 INFO [M:0;jenkins-hbase4:33941] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33941,1685354133478; zookeeper connection closed. 2023-05-29 09:56:59,170 DEBUG [Listener at localhost/35675-EventThread] zookeeper.ZKWatcher(600): master:33941-0x100765db78d0000, quorum=127.0.0.1:64229, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:56:59,172 WARN [Listener at localhost/35675] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:56:59,176 INFO [Listener at localhost/35675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:56:59,281 WARN [BP-1303580188-172.31.14.131-1685354130477 heartbeating to localhost/127.0.0.1:37765] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:56:59,281 WARN [BP-1303580188-172.31.14.131-1685354130477 heartbeating to localhost/127.0.0.1:37765] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1303580188-172.31.14.131-1685354130477 (Datanode Uuid d54c6923-995b-485a-b933-13994816110c) service to localhost/127.0.0.1:37765 2023-05-29 09:56:59,283 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/dfs/data/data3/current/BP-1303580188-172.31.14.131-1685354130477] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:56:59,284 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/dfs/data/data4/current/BP-1303580188-172.31.14.131-1685354130477] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-05-29 09:56:59,284 WARN [Listener at localhost/35675] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:56:59,286 INFO [Listener at localhost/35675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:56:59,417 WARN [BP-1303580188-172.31.14.131-1685354130477 heartbeating to localhost/127.0.0.1:37765] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:56:59,417 WARN [BP-1303580188-172.31.14.131-1685354130477 heartbeating to localhost/127.0.0.1:37765] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1303580188-172.31.14.131-1685354130477 (Datanode Uuid bfed62e7-27a5-4131-b1e6-915482b52f56) service to localhost/127.0.0.1:37765 2023-05-29 09:56:59,418 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/dfs/data/data1/current/BP-1303580188-172.31.14.131-1685354130477] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:56:59,419 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/cluster_3cb94ba8-c168-c4fc-81f5-ee3d69fe1cde/dfs/data/data2/current/BP-1303580188-172.31.14.131-1685354130477] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:56:59,454 INFO [Listener at localhost/35675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:56:59,566 INFO [Listener at localhost/35675] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 09:56:59,600 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 09:56:59,612 INFO [Listener at localhost/35675] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=51 (was 10) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost:37765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@6527199b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37765 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: regionserver/jenkins-hbase4:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/35675 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37765 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37765 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=436 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=94 (was 258), ProcessCount=168 (was 172), AvailableMemoryMB=3842 (was 4556) 2023-05-29 09:56:59,620 INFO [Listener at localhost/35675] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=52, OpenFileDescriptor=436, MaxFileDescriptor=60000, SystemLoadAverage=94, ProcessCount=168, AvailableMemoryMB=3842 2023-05-29 09:56:59,620 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/hadoop.log.dir so I do NOT create it in target/test-data/aced4540-2d03-8fb3-f693-c7538693e134 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/09db8984-fa54-d170-718f-25a700504923/hadoop.tmp.dir so I do NOT create it in target/test-data/aced4540-2d03-8fb3-f693-c7538693e134 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653, deleteOnExit=true 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/test.cache.data in system properties and HBase conf 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting 
hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/hadoop.log.dir in system properties and HBase conf 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 09:56:59,621 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 09:56:59,622 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 09:56:59,622 DEBUG [Listener at localhost/35675] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 09:56:59,622 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:56:59,622 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:56:59,622 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 09:56:59,622 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/nfs.dump.dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 09:56:59,623 INFO [Listener at localhost/35675] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 09:56:59,625 WARN [Listener at localhost/35675] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
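[Editor's note] At this point the first test has torn down and the utility is wiring up a fresh minicluster (1 master, 1 region server, 2 datanodes, 1 ZK server) for testLogRollOnDatanodeDeath, re-pointing the Hadoop and HBase directories at a new test-data directory. A minimal sketch of starting and stopping such a cluster from test code, assuming the StartMiniClusterOption builder present in branch-2.4; the option values mirror the StartMiniClusterOption printed in the log, and the test body is elided.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .createRootDir(false)
            .createWALDir(false)
            .build();
        util.startMiniCluster(option); // brings up ZK, HDFS and HBase under target/test-data
        try {
          // ... run test logic against util.getConnection() / util.getAdmin() ...
        } finally {
          util.shutdownMiniCluster(); // produces the "Minicluster is down" line seen above
        }
      }
    }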
2023-05-29 09:56:59,628 WARN [Listener at localhost/35675] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:56:59,628 WARN [Listener at localhost/35675] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:56:59,669 WARN [Listener at localhost/35675] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:56:59,672 INFO [Listener at localhost/35675] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:56:59,677 INFO [Listener at localhost/35675] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_43685_hdfs____.ioqj5u/webapp 2023-05-29 09:56:59,770 INFO [Listener at localhost/35675] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43685 2023-05-29 09:56:59,772 WARN [Listener at localhost/35675] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 09:56:59,775 WARN [Listener at localhost/35675] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:56:59,775 WARN [Listener at localhost/35675] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:56:59,824 WARN [Listener at localhost/37205] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:56:59,836 WARN [Listener at localhost/37205] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:56:59,840 WARN [Listener at localhost/37205] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:56:59,841 INFO [Listener at localhost/37205] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:56:59,845 INFO [Listener at localhost/37205] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_43243_datanode____.abrwlj/webapp 2023-05-29 09:56:59,846 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:56:59,936 INFO [Listener at localhost/37205] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43243 2023-05-29 09:56:59,943 WARN [Listener at localhost/35377] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:56:59,959 WARN [Listener at localhost/35377] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:56:59,961 WARN [Listener at localhost/35377] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:56:59,962 INFO [Listener at localhost/35377] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:56:59,966 INFO [Listener at localhost/35377] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_37335_datanode____s4ylp4/webapp 2023-05-29 09:57:00,071 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa79143f7993f4ae0: Processing first storage report for DS-0a05775c-9b96-454e-b909-50ff6f0a6a71 from datanode b4f5e773-d571-460e-8a43-c9ea44608b27 2023-05-29 09:57:00,071 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa79143f7993f4ae0: from storage DS-0a05775c-9b96-454e-b909-50ff6f0a6a71 node DatanodeRegistration(127.0.0.1:37199, datanodeUuid=b4f5e773-d571-460e-8a43-c9ea44608b27, infoPort=44781, infoSecurePort=0, ipcPort=35377, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:00,072 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa79143f7993f4ae0: Processing first storage report for DS-f5856f91-e14e-440e-9baa-bcb86cfe21c6 from datanode b4f5e773-d571-460e-8a43-c9ea44608b27 2023-05-29 09:57:00,072 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa79143f7993f4ae0: from storage DS-f5856f91-e14e-440e-9baa-bcb86cfe21c6 node DatanodeRegistration(127.0.0.1:37199, datanodeUuid=b4f5e773-d571-460e-8a43-c9ea44608b27, infoPort=44781, infoSecurePort=0, ipcPort=35377, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:00,079 INFO [Listener at localhost/35377] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37335 2023-05-29 09:57:00,090 WARN [Listener at localhost/42131] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:00,202 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x54732d584a14a991: Processing first storage report for DS-45ed997c-9412-40aa-9d81-5dca286cb8c2 from datanode 99ce3b19-f876-40a5-a16a-a974d4c7db06 2023-05-29 09:57:00,202 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x54732d584a14a991: from storage DS-45ed997c-9412-40aa-9d81-5dca286cb8c2 node DatanodeRegistration(127.0.0.1:36869, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34539, infoSecurePort=0, ipcPort=42131, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:00,202 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x54732d584a14a991: Processing first storage report for DS-7de774b4-ac3a-463c-accc-d6f469853008 from datanode 99ce3b19-f876-40a5-a16a-a974d4c7db06 2023-05-29 09:57:00,202 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x54732d584a14a991: from storage DS-7de774b4-ac3a-463c-accc-d6f469853008 node DatanodeRegistration(127.0.0.1:36869, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34539, infoSecurePort=0, ipcPort=42131, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), 
blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:00,206 DEBUG [Listener at localhost/42131] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134 2023-05-29 09:57:00,219 INFO [Listener at localhost/42131] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/zookeeper_0, clientPort=58162, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 09:57:00,221 INFO [Listener at localhost/42131] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58162 2023-05-29 09:57:00,221 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,225 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,247 INFO [Listener at localhost/42131] util.FSUtils(471): Created version file at hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8 with version=8 2023-05-29 09:57:00,247 INFO [Listener at localhost/42131] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 09:57:00,249 INFO [Listener at localhost/42131] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:57:00,249 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,249 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,250 INFO [Listener at localhost/42131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:57:00,250 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,250 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 
09:57:00,250 INFO [Listener at localhost/42131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:57:00,252 INFO [Listener at localhost/42131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38697 2023-05-29 09:57:00,252 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,253 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,254 INFO [Listener at localhost/42131] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38697 connecting to ZooKeeper ensemble=127.0.0.1:58162 2023-05-29 09:57:00,266 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:386970x0, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:57:00,267 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38697-0x100765f0db70000 connected 2023-05-29 09:57:00,294 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:57:00,294 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:00,295 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:57:00,296 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38697 2023-05-29 09:57:00,298 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38697 2023-05-29 09:57:00,299 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38697 2023-05-29 09:57:00,300 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38697 2023-05-29 09:57:00,302 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38697 2023-05-29 09:57:00,303 INFO [Listener at localhost/42131] master.HMaster(444): hbase.rootdir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8, hbase.cluster.distributed=false 2023-05-29 09:57:00,319 INFO [Listener at localhost/42131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:57:00,320 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,320 INFO [Listener at 
localhost/42131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,320 INFO [Listener at localhost/42131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:57:00,320 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:00,320 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:57:00,320 INFO [Listener at localhost/42131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:57:00,321 INFO [Listener at localhost/42131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44395 2023-05-29 09:57:00,322 INFO [Listener at localhost/42131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:57:00,323 DEBUG [Listener at localhost/42131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 09:57:00,324 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,325 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,326 INFO [Listener at localhost/42131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44395 connecting to ZooKeeper ensemble=127.0.0.1:58162 2023-05-29 09:57:00,329 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:443950x0, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:57:00,330 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): regionserver:443950x0, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:57:00,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44395-0x100765f0db70001 connected 2023-05-29 09:57:00,331 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:00,331 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:57:00,333 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44395 2023-05-29 09:57:00,333 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44395 2023-05-29 09:57:00,334 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44395 2023-05-29 09:57:00,339 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44395 2023-05-29 09:57:00,339 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44395 2023-05-29 09:57:00,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,341 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:57:00,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,343 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:57:00,343 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:57:00,344 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:57:00,345 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38697,1685354220249 from backup master directory 2023-05-29 09:57:00,345 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:57:00,347 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,347 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:57:00,347 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 09:57:00,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/hbase.id with ID: e976b9c3-51b4-465a-9703-6f8de4aa513d 2023-05-29 09:57:00,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:00,380 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5f6c4cf2 to 127.0.0.1:58162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:00,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59623ee5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:00,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:00,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 09:57:00,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:00,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store-tmp 2023-05-29 09:57:00,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:00,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:57:00,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:00,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:00,404 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:57:00,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:00,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:00,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:57:00,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38697%2C1685354220249, suffix=, logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249, archiveDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/oldWALs, maxLogs=10 2023-05-29 09:57:00,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354220408 2023-05-29 09:57:00,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] 2023-05-29 09:57:00,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:00,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:00,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,417 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,419 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 09:57:00,420 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 09:57:00,420 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:00,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:00,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=708425, jitterRate=-0.09919145703315735}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:00,427 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:57:00,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 09:57:00,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 09:57:00,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-29 09:57:00,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 09:57:00,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 09:57:00,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 09:57:00,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 09:57:00,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 09:57:00,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 09:57:00,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 09:57:00,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-29 09:57:00,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 09:57:00,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 09:57:00,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 09:57:00,453 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 09:57:00,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 09:57:00,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 09:57:00,459 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:00,459 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:00,459 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38697,1685354220249, sessionid=0x100765f0db70000, setting cluster-up flag (Was=false) 2023-05-29 09:57:00,464 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 09:57:00,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,474 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
09:57:00,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 09:57:00,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:00,481 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.hbase-snapshot/.tmp 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:57:00,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354250485 2023-05-29 09:57:00,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 09:57:00,486 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 09:57:00,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 09:57:00,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 09:57:00,487 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:00,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 09:57:00,487 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 09:57:00,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 09:57:00,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354220487,5,FailOnTimeoutGroup] 2023-05-29 09:57:00,488 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:00,491 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354220487,5,FailOnTimeoutGroup] 2023-05-29 09:57:00,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 09:57:00,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 09:57:00,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,504 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:00,505 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:00,505 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8 2023-05-29 09:57:00,515 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:00,516 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:57:00,517 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/info 2023-05-29 09:57:00,518 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:57:00,519 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,519 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:57:00,520 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:00,520 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:57:00,521 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,521 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:57:00,522 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/table 2023-05-29 09:57:00,523 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:57:00,524 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,525 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740 2023-05-29 09:57:00,525 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740 2023-05-29 09:57:00,528 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:57:00,529 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:57:00,531 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:00,532 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=840958, jitterRate=0.06933411955833435}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:57:00,532 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:57:00,532 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:57:00,532 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:57:00,534 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:00,534 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 09:57:00,534 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 09:57:00,535 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 09:57:00,537 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
2023-05-29 09:57:00,541 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(951): ClusterId : e976b9c3-51b4-465a-9703-6f8de4aa513d 2023-05-29 09:57:00,541 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:57:00,544 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:57:00,544 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:57:00,550 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:57:00,551 DEBUG [RS:0;jenkins-hbase4:44395] zookeeper.ReadOnlyZKClient(139): Connect 0x4839c92c to 127.0.0.1:58162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:00,555 DEBUG [RS:0;jenkins-hbase4:44395] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46fe3e5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:00,555 DEBUG [RS:0;jenkins-hbase4:44395] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42c1a25c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:57:00,564 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44395 2023-05-29 09:57:00,564 INFO [RS:0;jenkins-hbase4:44395] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:57:00,564 INFO [RS:0;jenkins-hbase4:44395] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:57:00,564 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 09:57:00,565 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,38697,1685354220249 with isa=jenkins-hbase4.apache.org/172.31.14.131:44395, startcode=1685354220319 2023-05-29 09:57:00,565 DEBUG [RS:0;jenkins-hbase4:44395] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:57:00,568 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41081, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:57:00,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,570 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8 2023-05-29 09:57:00,570 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37205 2023-05-29 09:57:00,570 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:57:00,572 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:00,572 DEBUG [RS:0;jenkins-hbase4:44395] zookeeper.ZKUtil(162): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,572 WARN [RS:0;jenkins-hbase4:44395] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 09:57:00,572 INFO [RS:0;jenkins-hbase4:44395] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:00,572 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,573 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44395,1685354220319] 2023-05-29 09:57:00,576 DEBUG [RS:0;jenkins-hbase4:44395] zookeeper.ZKUtil(162): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,577 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:57:00,578 INFO [RS:0;jenkins-hbase4:44395] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:57:00,580 INFO [RS:0;jenkins-hbase4:44395] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:57:00,580 INFO [RS:0;jenkins-hbase4:44395] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:57:00,580 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,580 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:57:00,581 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
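The MemStoreFlusher record above reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, which is 95% of the limit (782.4 × 0.95 ≈ 743.3), and the WALFactory record shows the filesystem-backed FSHLogProvider in use. The sketch below lists the standard HBase configuration keys that usually drive those values; it is illustrative only, with assumed values and a made-up class name, and is not taken from this test's actual setup.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalAndMemstoreConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();

            // Global memstore limit as a fraction of the RS heap; 0.4 is the usual
            // default, and 0.4 of this JVM's heap is presumably the 782.4 M above.
            conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);

            // Low-water mark as a fraction of the limit; the default 0.95 matches
            // 743.3 M vs 782.4 M in the log above.
            conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);

            // WAL provider; "filesystem" resolves to FSHLogProvider as instantiated above.
            conf.set("hbase.wal.provider", "filesystem");

            System.out.println("wal provider = " + conf.get("hbase.wal.provider"));
        }
    }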
2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,582 DEBUG [RS:0;jenkins-hbase4:44395] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:00,587 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,587 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,587 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,598 INFO [RS:0;jenkins-hbase4:44395] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:57:00,598 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44395,1685354220319-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 09:57:00,609 INFO [RS:0;jenkins-hbase4:44395] regionserver.Replication(203): jenkins-hbase4.apache.org,44395,1685354220319 started 2023-05-29 09:57:00,610 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44395,1685354220319, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44395, sessionid=0x100765f0db70001 2023-05-29 09:57:00,610 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:57:00,610 DEBUG [RS:0;jenkins-hbase4:44395] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,610 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44395,1685354220319' 2023-05-29 09:57:00,610 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:57:00,610 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44395,1685354220319' 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:57:00,611 DEBUG [RS:0;jenkins-hbase4:44395] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:57:00,612 DEBUG [RS:0;jenkins-hbase4:44395] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:57:00,612 INFO [RS:0;jenkins-hbase4:44395] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:57:00,612 INFO [RS:0;jenkins-hbase4:44395] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 09:57:00,687 DEBUG [jenkins-hbase4:38697] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 09:57:00,688 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44395,1685354220319, state=OPENING 2023-05-29 09:57:00,690 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 09:57:00,692 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:00,693 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44395,1685354220319}] 2023-05-29 09:57:00,693 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:57:00,714 INFO [RS:0;jenkins-hbase4:44395] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44395%2C1685354220319, suffix=, logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319, archiveDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/oldWALs, maxLogs=32 2023-05-29 09:57:00,723 INFO [RS:0;jenkins-hbase4:44395] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354220715 2023-05-29 09:57:00,723 DEBUG [RS:0;jenkins-hbase4:44395] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] 2023-05-29 09:57:00,848 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:00,848 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 09:57:00,851 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44872, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 09:57:00,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 09:57:00,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:00,857 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta, suffix=.meta, logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319, archiveDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/oldWALs, maxLogs=32 2023-05-29 09:57:00,867 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354220858.meta 2023-05-29 09:57:00,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK], DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]] 2023-05-29 09:57:00,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:00,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 09:57:00,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 09:57:00,869 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 09:57:00,869 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 09:57:00,869 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:00,869 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 09:57:00,869 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 09:57:00,871 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:57:00,872 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/info 2023-05-29 09:57:00,872 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/info 2023-05-29 09:57:00,872 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:57:00,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,873 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:57:00,874 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:00,874 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:00,874 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:57:00,875 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,875 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:57:00,876 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/table 2023-05-29 09:57:00,876 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740/table 2023-05-29 09:57:00,877 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:57:00,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:00,879 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740 2023-05-29 09:57:00,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/meta/1588230740 2023-05-29 09:57:00,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:57:00,884 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:57:00,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=778566, jitterRate=-0.010002166032791138}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:57:00,885 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:57:00,887 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354220848 2023-05-29 09:57:00,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 09:57:00,890 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 09:57:00,891 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44395,1685354220319, state=OPEN 2023-05-29 09:57:00,893 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 09:57:00,893 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:57:00,896 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 09:57:00,896 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44395,1685354220319 in 200 msec 2023-05-29 09:57:00,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 09:57:00,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-05-29 09:57:00,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 418 msec 2023-05-29 09:57:00,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354220901, completionTime=-1 2023-05-29 09:57:00,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 09:57:00,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 09:57:00,904 DEBUG [hconnection-0x60a9489-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:57:00,906 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44876, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:57:00,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 09:57:00,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354280907 2023-05-29 09:57:00,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354340907 2023-05-29 09:57:00,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-29 09:57:00,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38697,1685354220249-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38697,1685354220249-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38697,1685354220249-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38697, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:00,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
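The "WAL configuration" records above (blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32) are consistent with HBase's defaults: the roll size is the block size times the log-roll multiplier (256 MB × 0.5 = 128 MB), and 32 is the default cap on un-archived WAL files. Continuing the illustrative configuration sketch from earlier, the keys involved are believed to be the standard ones below; setting them explicitly like this is an assumption, not necessarily what the test does.

    // Continuing the earlier sketch: keys behind "blocksize=256 MB, rollsize=128 MB, maxLogs=32".
    // WAL block size; if unset, HBase falls back to the underlying HDFS block size.
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    // A WAL is rolled once it reaches blocksize * multiplier: 256 MB * 0.5 = 128 MB.
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // Maximum number of WAL files kept before flushes are forced; 32 is the default.
    conf.setInt("hbase.regionserver.maxlogs", 32);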
2023-05-29 09:57:00,914 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:00,915 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 09:57:00,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 09:57:00,917 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:57:00,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:57:00,919 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:00,920 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7 empty. 2023-05-29 09:57:00,920 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:00,920 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 09:57:00,933 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:00,934 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 992e98307828bc5a28731c7cdf1f58a7, NAME => 'hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp 2023-05-29 09:57:00,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:00,942 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 992e98307828bc5a28731c7cdf1f58a7, disabling compactions & flushes 2023-05-29 09:57:00,943 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 
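For readers unfamiliar with the descriptor syntax in the create 'hbase:namespace' record above, a roughly equivalent construction with the HBase 2.x client builder API is sketched below. This is illustrative only (the class name is invented and only a subset of the logged attributes is shown); the master builds this descriptor internally rather than running client code like this.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
        public static void main(String[] args) {
            // Mirrors the attributes logged for the 'info' family of hbase:namespace.
            TableDescriptor namespaceTable = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("hbase", "namespace"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                    .setInMemory(true)                  // IN_MEMORY => 'true'
                    .setMaxVersions(10)                 // VERSIONS => '10'
                    .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                    .build())
                .build();
            System.out.println(namespaceTable);
        }
    }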
2023-05-29 09:57:00,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:00,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. after waiting 0 ms 2023-05-29 09:57:00,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:00,943 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:00,943 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 992e98307828bc5a28731c7cdf1f58a7: 2023-05-29 09:57:00,946 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:57:00,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354220947"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354220947"}]},"ts":"1685354220947"} 2023-05-29 09:57:00,949 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:57:00,951 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:57:00,951 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354220951"}]},"ts":"1685354220951"} 2023-05-29 09:57:00,953 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 09:57:00,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=992e98307828bc5a28731c7cdf1f58a7, ASSIGN}] 2023-05-29 09:57:00,961 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=992e98307828bc5a28731c7cdf1f58a7, ASSIGN 2023-05-29 09:57:00,962 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=992e98307828bc5a28731c7cdf1f58a7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44395,1685354220319; forceNewPlan=false, retain=false 2023-05-29 09:57:01,113 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=992e98307828bc5a28731c7cdf1f58a7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,114 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354221113"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354221113"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354221113"}]},"ts":"1685354221113"} 2023-05-29 09:57:01,116 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 992e98307828bc5a28731c7cdf1f58a7, server=jenkins-hbase4.apache.org,44395,1685354220319}] 2023-05-29 09:57:01,274 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:01,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 992e98307828bc5a28731c7cdf1f58a7, NAME => 'hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:01,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:01,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,276 INFO [StoreOpener-992e98307828bc5a28731c7cdf1f58a7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,278 DEBUG [StoreOpener-992e98307828bc5a28731c7cdf1f58a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/info 2023-05-29 09:57:01,278 DEBUG [StoreOpener-992e98307828bc5a28731c7cdf1f58a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/info 2023-05-29 09:57:01,278 INFO [StoreOpener-992e98307828bc5a28731c7cdf1f58a7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 992e98307828bc5a28731c7cdf1f58a7 columnFamilyName info 2023-05-29 09:57:01,279 INFO [StoreOpener-992e98307828bc5a28731c7cdf1f58a7-1] regionserver.HStore(310): Store=992e98307828bc5a28731c7cdf1f58a7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:01,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,284 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:01,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:01,287 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 992e98307828bc5a28731c7cdf1f58a7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=866588, jitterRate=0.10192449390888214}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:01,287 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 992e98307828bc5a28731c7cdf1f58a7: 2023-05-29 09:57:01,289 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7., pid=6, masterSystemTime=1685354221269 2023-05-29 09:57:01,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:01,292 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 
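The desiredMaxFileSize/jitterRate pairs logged for the split policies appear to be the configured region maximum file size with a random jitter applied, i.e. base × (1 + jitterRate). Working backwards from the logged values, both regions share a base of 786432 bytes, the same value the MAX_FILESIZE warning at the end of this section complains about:

    786432 × (1 + 0.10192449) ≈ 866588   (hbase:namespace region, above)
    786432 × (1 − 0.01000217) ≈ 778566   (hbase:meta region, earlier)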
2023-05-29 09:57:01,293 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=992e98307828bc5a28731c7cdf1f58a7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,293 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354221293"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354221293"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354221293"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354221293"}]},"ts":"1685354221293"} 2023-05-29 09:57:01,298 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 09:57:01,298 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 992e98307828bc5a28731c7cdf1f58a7, server=jenkins-hbase4.apache.org,44395,1685354220319 in 179 msec 2023-05-29 09:57:01,301 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 09:57:01,301 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=992e98307828bc5a28731c7cdf1f58a7, ASSIGN in 339 msec 2023-05-29 09:57:01,302 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:57:01,302 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354221302"}]},"ts":"1685354221302"} 2023-05-29 09:57:01,304 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 09:57:01,307 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:57:01,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 393 msec 2023-05-29 09:57:01,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 09:57:01,317 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:01,317 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:01,322 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 09:57:01,332 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): 
master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:01,336 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-29 09:57:01,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 09:57:01,352 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:01,356 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-05-29 09:57:01,373 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 09:57:01,376 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.028sec 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38697,1685354220249-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 09:57:01,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38697,1685354220249-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
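Both the region server (earlier) and the master here report quota support disabled, and slow/large request logging to the hbase:slowlog system table is likewise off; these are opt-in features that default to false. Continuing the illustrative sketch, the switches are believed to be the keys below, shown only to indicate how the features would be turned on.

    // Continuing the earlier sketch: opt-in features reported as disabled above.
    // RPC/space quotas (MasterQuotaManager, RegionServerRpcQuotaManager).
    conf.setBoolean("hbase.quota.enabled", true);
    // Persist slow/large request records to the hbase:slowlog system table.
    conf.setBoolean("hbase.regionserver.slowlog.systable.enabled", true);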
2023-05-29 09:57:01,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 09:57:01,441 DEBUG [Listener at localhost/42131] zookeeper.ReadOnlyZKClient(139): Connect 0x65ab8d8a to 127.0.0.1:58162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:01,446 DEBUG [Listener at localhost/42131] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68667cd3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:01,448 DEBUG [hconnection-0x629c0801-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:57:01,450 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:57:01,452 INFO [Listener at localhost/42131] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:01,453 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:01,456 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 09:57:01,456 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:01,457 INFO [Listener at localhost/42131] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 09:57:01,469 INFO [Listener at localhost/42131] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:57:01,469 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:01,469 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:01,470 INFO [Listener at localhost/42131] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:57:01,470 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:01,470 INFO [Listener at localhost/42131] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:57:01,470 INFO [Listener at localhost/42131] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, 
hbase.pb.AdminService 2023-05-29 09:57:01,472 INFO [Listener at localhost/42131] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37367 2023-05-29 09:57:01,472 INFO [Listener at localhost/42131] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:57:01,474 DEBUG [Listener at localhost/42131] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 09:57:01,474 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:01,476 INFO [Listener at localhost/42131] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:01,477 INFO [Listener at localhost/42131] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37367 connecting to ZooKeeper ensemble=127.0.0.1:58162 2023-05-29 09:57:01,480 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:373670x0, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:57:01,482 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37367-0x100765f0db70005 connected 2023-05-29 09:57:01,482 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(162): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:57:01,483 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(162): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-29 09:57:01,485 DEBUG [Listener at localhost/42131] zookeeper.ZKUtil(164): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:57:01,485 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37367 2023-05-29 09:57:01,485 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37367 2023-05-29 09:57:01,486 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37367 2023-05-29 09:57:01,486 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37367 2023-05-29 09:57:01,486 DEBUG [Listener at localhost/42131] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37367 2023-05-29 09:57:01,493 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(951): ClusterId : e976b9c3-51b4-465a-9703-6f8de4aa513d 2023-05-29 09:57:01,493 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:57:01,497 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:57:01,497 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:57:01,499 DEBUG 
[RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:57:01,500 DEBUG [RS:1;jenkins-hbase4:37367] zookeeper.ReadOnlyZKClient(139): Connect 0x02010121 to 127.0.0.1:58162 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:01,504 DEBUG [RS:1;jenkins-hbase4:37367] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cc6ff11, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:01,504 DEBUG [RS:1;jenkins-hbase4:37367] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8f96a51, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:57:01,513 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37367 2023-05-29 09:57:01,513 INFO [RS:1;jenkins-hbase4:37367] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:57:01,514 INFO [RS:1;jenkins-hbase4:37367] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:57:01,514 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 09:57:01,514 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,38697,1685354220249 with isa=jenkins-hbase4.apache.org/172.31.14.131:37367, startcode=1685354221469 2023-05-29 09:57:01,514 DEBUG [RS:1;jenkins-hbase4:37367] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:57:01,517 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47069, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:57:01,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,518 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8 2023-05-29 09:57:01,518 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37205 2023-05-29 09:57:01,518 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:57:01,519 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:01,519 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:01,520 DEBUG [RS:1;jenkins-hbase4:37367] zookeeper.ZKUtil(162): regionserver:37367-0x100765f0db70005, 
quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,520 WARN [RS:1;jenkins-hbase4:37367] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 09:57:01,520 INFO [RS:1;jenkins-hbase4:37367] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:01,520 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37367,1685354221469] 2023-05-29 09:57:01,520 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1946): logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,520 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,522 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,525 DEBUG [RS:1;jenkins-hbase4:37367] zookeeper.ZKUtil(162): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,525 DEBUG [RS:1;jenkins-hbase4:37367] zookeeper.ZKUtil(162): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,526 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:57:01,526 INFO [RS:1;jenkins-hbase4:37367] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:57:01,528 INFO [RS:1;jenkins-hbase4:37367] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:57:01,529 INFO [RS:1;jenkins-hbase4:37367] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:57:01,529 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:01,529 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:57:01,530 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,531 DEBUG [RS:1;jenkins-hbase4:37367] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:01,532 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:01,532 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:01,532 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:01,543 INFO [RS:1;jenkins-hbase4:37367] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:57:01,544 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37367,1685354221469-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
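Editor's note: the executor-service and chore lines above are RS:1 standing up its fixed-size worker pools (RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS, ...) and its periodic background tasks (CompactionChecker and MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, HeapMemoryTunerChore every 60000 ms). As a rough analogy only, not HBase's own ChoreService/ScheduledChore implementation, the scheduling pattern looks like this with plain JDK executors:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// JDK-only analogy for the periodic "chores" listed in the log; task names and
// periods are taken from the log lines, the scheduling machinery is illustrative.
public class ChoreAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService chores = Executors.newScheduledThreadPool(2);

        // CompactionChecker: period=1000 ms in the log
        chores.scheduleAtFixedRate(() -> System.out.println("check compactions"),
                1000, 1000, TimeUnit.MILLISECONDS);

        // MemstoreFlusherChore: period=1000 ms in the log
        chores.scheduleAtFixedRate(() -> System.out.println("check memstore pressure"),
                1000, 1000, TimeUnit.MILLISECONDS);

        Thread.sleep(3000);     // let a few periods run, then stop the sketch
        chores.shutdownNow();
    }
}
```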
2023-05-29 09:57:01,554 INFO [RS:1;jenkins-hbase4:37367] regionserver.Replication(203): jenkins-hbase4.apache.org,37367,1685354221469 started 2023-05-29 09:57:01,554 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37367,1685354221469, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37367, sessionid=0x100765f0db70005 2023-05-29 09:57:01,554 INFO [Listener at localhost/42131] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase4:37367,5,FailOnTimeoutGroup] 2023-05-29 09:57:01,554 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:57:01,554 INFO [Listener at localhost/42131] wal.TestLogRolling(323): Replication=2 2023-05-29 09:57:01,554 DEBUG [RS:1;jenkins-hbase4:37367] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,555 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37367,1685354221469' 2023-05-29 09:57:01,555 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:57:01,556 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:57:01,557 DEBUG [Listener at localhost/42131] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 09:57:01,557 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:57:01,557 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:57:01,557 DEBUG [RS:1;jenkins-hbase4:37367] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:01,557 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37367,1685354221469' 2023-05-29 09:57:01,557 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:57:01,558 DEBUG [RS:1;jenkins-hbase4:37367] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:57:01,558 DEBUG [RS:1;jenkins-hbase4:37367] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:57:01,559 INFO [RS:1;jenkins-hbase4:37367] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:57:01,559 INFO [RS:1;jenkins-hbase4:37367] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-05-29 09:57:01,560 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47672, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 09:57:01,562 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 
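Editor's note: earlier in this stretch the Listener thread reports "Started new server=Thread[RS:1;jenkins-hbase4:37367,...]", i.e. the test has just added a second region server to the already-running mini cluster before creating its table. A hedged sketch of how that is typically done against HBaseTestingUtility; TEST_UTIL is an assumed field name, not necessarily what the test uses:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

// Hedged sketch: adding an extra region server (RS:1 in the log) to a running mini cluster.
public class StartSecondRegionServer {
    static void startExtraRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
        MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
        // Produces the "Started new server=Thread[RS:1;...]" style line seen above.
        cluster.startRegionServer();
    }
}
```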
2023-05-29 09:57:01,562 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-29 09:57:01,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:01,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-29 09:57:01,566 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:57:01,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-29 09:57:01,567 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:57:01,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:57:01,569 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,569 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5 empty. 
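Editor's note: the two TableDescriptorChecker warnings and the create request above show the test deliberately running 'TestLogRolling-testLogRollOnDatanodeDeath' with a tiny MAX_FILESIZE (786432 bytes) and MEMSTORE_FLUSHSIZE (8192 bytes) so that flushes and splits happen quickly; the log does not say whether these were set per-table or via the cluster config keys hbase.hregion.max.filesize / hbase.hregion.memstore.flush.size. A hedged sketch of how such a table could be created through the HBase 2.x client API, setting the sizes per-table; the actual test code may differ:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: a table shaped like the one in the log, with the deliberately
// small file and flush sizes that trigger the TableDescriptorChecker warnings.
public class CreateTinyTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableDescriptorBuilder table = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
                    .setMaxFileSize(786432L)        // triggers the MAX_FILESIZE warning
                    .setMemStoreFlushSize(8192L)    // triggers the MEMSTORE_FLUSHSIZE warning
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                            .newBuilder(Bytes.toBytes("info"))
                            .setBloomFilterType(BloomType.ROW)
                            .setMaxVersions(1)
                            .build());
            admin.createTable(table.build());
        }
    }
}
```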
2023-05-29 09:57:01,570 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,570 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-29 09:57:01,583 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:01,585 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => 96208f907885521780405e30a7a779f5, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/.tmp 2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing 96208f907885521780405e30a7a779f5, disabling compactions & flushes 2023-05-29 09:57:01,593 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. after waiting 0 ms 2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:01,593 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 
2023-05-29 09:57:01,593 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:01,596 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:57:01,598 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685354221598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354221598"}]},"ts":"1685354221598"} 2023-05-29 09:57:01,600 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:57:01,601 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:57:01,601 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354221601"}]},"ts":"1685354221601"} 2023-05-29 09:57:01,603 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-29 09:57:01,611 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-05-29 09:57:01,613 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-29 09:57:01,613 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-29 09:57:01,613 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-29 09:57:01,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=96208f907885521780405e30a7a779f5, ASSIGN}] 2023-05-29 09:57:01,615 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=96208f907885521780405e30a7a779f5, ASSIGN 2023-05-29 09:57:01,616 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=96208f907885521780405e30a7a779f5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44395,1685354220319; forceNewPlan=false, retain=false 2023-05-29 09:57:01,661 INFO [RS:1;jenkins-hbase4:37367] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37367%2C1685354221469, suffix=, logDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,37367,1685354221469, 
archiveDir=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/oldWALs, maxLogs=32 2023-05-29 09:57:01,674 INFO [RS:1;jenkins-hbase4:37367] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,37367,1685354221469/jenkins-hbase4.apache.org%2C37367%2C1685354221469.1685354221663 2023-05-29 09:57:01,674 DEBUG [RS:1;jenkins-hbase4:37367] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK], DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]] 2023-05-29 09:57:01,769 INFO [jenkins-hbase4:38697] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-29 09:57:01,770 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=96208f907885521780405e30a7a779f5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,770 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685354221769"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354221769"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354221769"}]},"ts":"1685354221769"} 2023-05-29 09:57:01,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 96208f907885521780405e30a7a779f5, server=jenkins-hbase4.apache.org,44395,1685354220319}] 2023-05-29 09:57:01,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 
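Editor's note: the AbstractFSWAL configuration line just above reports blocksize=256 MB, rollsize=128 MB, maxLogs=32 for RS:1's new WAL. The roll size is normally the WAL block size times hbase.regionserver.logroll.multiplier (default 0.5), which matches 256 MB × 0.5 = 128 MB here. A small sketch of that relationship, with the block size treated as an input and the default multiplier assumed:

```java
// Reproduces the rollsize printed in the WAL configuration line. If the test overrides
// hbase.regionserver.logroll.multiplier, the arithmetic changes accordingly.
public class WalRollSizeSketch {
    public static void main(String[] args) {
        long blockSizeBytes = 256L * 1024 * 1024;  // blocksize=256 MB from the log
        double rollMultiplier = 0.5;               // hbase.regionserver.logroll.multiplier (default)

        long rollSizeBytes = (long) (blockSizeBytes * rollMultiplier);
        System.out.println("rollsize MB = " + rollSizeBytes / (1024 * 1024));  // prints 128
    }
}
```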
2023-05-29 09:57:01,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 96208f907885521780405e30a7a779f5, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:01,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath 96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:01,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,934 INFO [StoreOpener-96208f907885521780405e30a7a779f5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,936 DEBUG [StoreOpener-96208f907885521780405e30a7a779f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info 2023-05-29 09:57:01,936 DEBUG [StoreOpener-96208f907885521780405e30a7a779f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info 2023-05-29 09:57:01,936 INFO [StoreOpener-96208f907885521780405e30a7a779f5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 96208f907885521780405e30a7a779f5 columnFamilyName info 2023-05-29 09:57:01,937 INFO [StoreOpener-96208f907885521780405e30a7a779f5-1] regionserver.HStore(310): Store=96208f907885521780405e30a7a779f5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:01,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 96208f907885521780405e30a7a779f5 2023-05-29 09:57:01,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:01,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 96208f907885521780405e30a7a779f5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=825418, jitterRate=0.04957430064678192}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:01,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:01,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5., pid=11, masterSystemTime=1685354221925 2023-05-29 09:57:01,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:01,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 
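Editor's note: the "Opened 96208f907885521780405e30a7a779f5" line reports a SteppingSplitPolicy with desiredMaxFileSize=825418 and jitterRate=0.04957…. Those numbers are consistent with the 786432-byte max file size seen in the earlier checker warning plus a multiplicative jitter: 786432 + 786432 × 0.04957… ≈ 825418. A quick check of that arithmetic (the jitter itself is randomized per region, so only this run's value is reproduced):

```java
// Verifies that desiredMaxFileSize in the region-open line is the configured max file
// size with the reported jitterRate applied.
public class SplitJitterCheck {
    public static void main(String[] args) {
        long configuredMaxFileSize = 786432L;       // hbase.hregion.max.filesize from the warning
        double jitterRate = 0.04957430064678192;    // jitterRate from the region-open line

        long desired = configuredMaxFileSize + (long) (configuredMaxFileSize * jitterRate);
        System.out.println(desired);                // 825418, matching the log
    }
}
```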
2023-05-29 09:57:01,950 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=96208f907885521780405e30a7a779f5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:01,951 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1685354221950"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354221950"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354221950"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354221950"}]},"ts":"1685354221950"} 2023-05-29 09:57:01,955 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 09:57:01,956 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 96208f907885521780405e30a7a779f5, server=jenkins-hbase4.apache.org,44395,1685354220319 in 181 msec 2023-05-29 09:57:01,958 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 09:57:01,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=96208f907885521780405e30a7a779f5, ASSIGN in 343 msec 2023-05-29 09:57:01,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:57:01,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354221960"}]},"ts":"1685354221960"} 2023-05-29 09:57:01,961 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-29 09:57:01,964 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:57:01,966 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 402 msec 2023-05-29 09:57:04,326 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 09:57:06,578 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 09:57:06,579 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 09:57:06,579 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-29 09:57:11,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:57:11,569 INFO [Listener at localhost/42131] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-29 09:57:11,572 DEBUG [Listener at localhost/42131] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-29 09:57:11,572 DEBUG [Listener at localhost/42131] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:11,586 WARN [Listener at localhost/42131] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:11,588 WARN [Listener at localhost/42131] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:11,590 INFO [Listener at localhost/42131] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:11,594 INFO [Listener at localhost/42131] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_36119_datanode____.fyxu83/webapp 2023-05-29 09:57:11,684 INFO [Listener at localhost/42131] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36119 2023-05-29 09:57:11,694 WARN [Listener at localhost/39377] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:11,718 WARN [Listener at localhost/39377] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:11,720 WARN [Listener at localhost/39377] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:11,721 INFO [Listener at localhost/39377] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:11,727 INFO [Listener at localhost/39377] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_46469_datanode____.vmmy5g/webapp 2023-05-29 09:57:11,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfeeaec58ae9075dd: Processing first storage report for DS-632648df-7751-44b5-b10a-b5b0d522a3be from datanode eb68e000-7e57-4393-b5ba-0fe827c716dc 2023-05-29 09:57:11,801 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfeeaec58ae9075dd: from storage DS-632648df-7751-44b5-b10a-b5b0d522a3be node DatanodeRegistration(127.0.0.1:38747, datanodeUuid=eb68e000-7e57-4393-b5ba-0fe827c716dc, infoPort=41935, infoSecurePort=0, ipcPort=39377, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:11,801 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfeeaec58ae9075dd: Processing first storage report for DS-15f59f4b-3ce3-4cdf-9cf2-0fe00da596cc from datanode eb68e000-7e57-4393-b5ba-0fe827c716dc 2023-05-29 09:57:11,801 INFO [Block report 
processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfeeaec58ae9075dd: from storage DS-15f59f4b-3ce3-4cdf-9cf2-0fe00da596cc node DatanodeRegistration(127.0.0.1:38747, datanodeUuid=eb68e000-7e57-4393-b5ba-0fe827c716dc, infoPort=41935, infoSecurePort=0, ipcPort=39377, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:11,832 INFO [Listener at localhost/39377] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46469 2023-05-29 09:57:11,841 WARN [Listener at localhost/40167] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:11,857 WARN [Listener at localhost/40167] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:11,859 WARN [Listener at localhost/40167] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:11,861 INFO [Listener at localhost/40167] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:11,865 INFO [Listener at localhost/40167] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_39743_datanode____.tqkv2h/webapp 2023-05-29 09:57:11,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b29ce44636730e: Processing first storage report for DS-e2be6146-16f0-4920-a1cf-8167c42f40da from datanode aeae091d-5f3f-40d0-bf4a-77709011ea35 2023-05-29 09:57:11,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b29ce44636730e: from storage DS-e2be6146-16f0-4920-a1cf-8167c42f40da node DatanodeRegistration(127.0.0.1:45001, datanodeUuid=aeae091d-5f3f-40d0-bf4a-77709011ea35, infoPort=36999, infoSecurePort=0, ipcPort=40167, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:11,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b29ce44636730e: Processing first storage report for DS-f9571ea3-eec2-4b55-a833-f7218b539ff5 from datanode aeae091d-5f3f-40d0-bf4a-77709011ea35 2023-05-29 09:57:11,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b29ce44636730e: from storage DS-f9571ea3-eec2-4b55-a833-f7218b539ff5 node DatanodeRegistration(127.0.0.1:45001, datanodeUuid=aeae091d-5f3f-40d0-bf4a-77709011ea35, infoPort=36999, infoSecurePort=0, ipcPort=40167, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:11,969 INFO [Listener at localhost/40167] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39743 2023-05-29 09:57:11,979 WARN [Listener at localhost/36623] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:12,078 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3cc20734cb42ca9: Processing first storage report for 
DS-afc682ef-58b5-4780-b72a-93bdfa947bcb from datanode a563df28-2885-4ba9-ab53-1f4e2371348f 2023-05-29 09:57:12,078 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3cc20734cb42ca9: from storage DS-afc682ef-58b5-4780-b72a-93bdfa947bcb node DatanodeRegistration(127.0.0.1:38121, datanodeUuid=a563df28-2885-4ba9-ab53-1f4e2371348f, infoPort=40823, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:12,079 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf3cc20734cb42ca9: Processing first storage report for DS-44469f84-6a6f-4f19-b5a4-1e9fc8200023 from datanode a563df28-2885-4ba9-ab53-1f4e2371348f 2023-05-29 09:57:12,079 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf3cc20734cb42ca9: from storage DS-44469f84-6a6f-4f19-b5a4-1e9fc8200023 node DatanodeRegistration(127.0.0.1:38121, datanodeUuid=a563df28-2885-4ba9-ab53-1f4e2371348f, infoPort=40823, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:12,086 WARN [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:12,107 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 09:57:12,107 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,107 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,115 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354220715 block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008 in 
pipeline [DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]) is bad. 2023-05-29 09:57:12,121 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 09:57:12,122 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354220408 block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]) is bad. 2023-05-29 09:57:12,123 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36869]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,122 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354220858.meta block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]) is bad. 2023-05-29 09:57:12,122 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36869]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,122 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,37367,1685354221469/jenkins-hbase4.apache.org%2C37367%2C1685354221469.1685354221663 block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK], DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK]) is bad. 
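Editor's note: the burst of ResponseProcessor/DataStreamer warnings starting at 09:57:12 is the point where the test takes down a datanode that sits in the open WAL write pipelines, so every affected stream marks DatanodeInfoWithStorage[127.0.0.1:36869,...] as bad and begins pipeline error recovery; this is the testLogRollOnDatanodeDeath scenario named in the table. In a mini-cluster test this kind of failure is usually injected through MiniDFSCluster; a hedged sketch follows, and the actual TestLogRolling code may select the datanode differently:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Illustrative failure injection against a MiniDFSCluster. TEST_UTIL stands in for an
// already-started HBaseTestingUtility and is an assumed name, not the test's real field.
public class DatanodeDeathSketch {
    static void killOneWalDatanode(HBaseTestingUtility TEST_UTIL) throws Exception {
        MiniDFSCluster dfs = TEST_UTIL.getDFSCluster();

        // Stop the first datanode; any DFSOutputStream writing through it (WALs included)
        // should report the node as bad and run pipeline recovery, as seen above.
        MiniDFSCluster.DataNodeProperties stopped = dfs.stopDataNode(0);

        // The test can bring it back later if the scenario calls for it.
        dfs.restartDataNode(stopped, true);
    }
}
```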
2023-05-29 09:57:12,130 INFO [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:12,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:52392 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52392 dst: /127.0.0.1:37199 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,133 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-204167237_17 at /127.0.0.1:52444 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52444 dst: /127.0.0.1:37199 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,134 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:52356 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52356 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:37199 remote=/127.0.0.1:52356]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,135 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37199]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,136 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:45422 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36869:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45422 dst: /127.0.0.1:36869 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,137 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:52390 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52390 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:37199 remote=/127.0.0.1:52390]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,143 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37199]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,146 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:45452 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:36869:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45452 dst: /127.0.0.1:36869 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,200 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid 99ce3b19-f876-40a5-a16a-a974d4c7db06) service to localhost/127.0.0.1:37205 2023-05-29 09:57:12,201 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data3/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:12,201 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data4/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:12,234 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:45460 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36869:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45460 dst: /127.0.0.1:36869 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,235 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-204167237_17 at /127.0.0.1:45498 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:36869:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45498 dst: /127.0.0.1:36869 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,237 WARN [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:12,237 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,237 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,238 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,238 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1017] 
hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:12,246 INFO [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:12,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:50390 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50390 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:50402 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50402 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,352 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:57:12,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:50392 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50392 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,351 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-204167237_17 at /127.0.0.1:50414 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:37199:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50414 dst: /127.0.0.1:37199 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:12,353 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid b4f5e773-d571-460e-8a43-c9ea44608b27) service to localhost/127.0.0.1:37205 2023-05-29 09:57:12,355 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data1/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:12,355 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data2/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:12,361 WARN [RS:0;jenkins-hbase4:44395.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:12,361 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44395%2C1685354220319:(num 1685354220715) roll requested 2023-05-29 09:57:12,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44395] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:12,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44395] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:44890 deadline: 1685354242360, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-29 09:57:12,372 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL 2023-05-29 09:57:12,372 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354220715 with entries=4, filesize=983 B; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 2023-05-29 09:57:12,376 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK], DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]] 2023-05-29 09:57:12,376 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting...
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:12,376 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354220715 is not closed yet, will try archiving it next time 2023-05-29 09:57:12,377 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354220715; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:24,462 INFO [Listener at localhost/36623] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 2023-05-29 09:57:24,463 WARN [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:24,464 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:57:24,465 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 block BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK], DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK]) is bad. 
2023-05-29 09:57:24,468 INFO [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:24,471 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:32840 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:45001:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:32840 dst: /127.0.0.1:45001 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:45001 remote=/127.0.0.1:32840]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:24,472 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45001]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:24,473 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:53406 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:38747:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53406 dst: /127.0.0.1:38747 java.io.InterruptedIOException: Interrupted while waiting for IO on 
channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:24,574 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:57:24,575 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid eb68e000-7e57-4393-b5ba-0fe827c716dc) service to localhost/127.0.0.1:37205 2023-05-29 09:57:24,575 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data5/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:24,576 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data6/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:24,581 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]] 2023-05-29 09:57:24,581 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]] 2023-05-29 09:57:24,581 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44395%2C1685354220319:(num 1685354232361) roll requested 2023-05-29 09:57:24,587 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49756 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741840_1021 to mirror 127.0.0.1:37199: java.net.ConnectException: Connection refused 2023-05-29 09:57:24,587 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741840_1021 2023-05-29 09:57:24,587 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49756 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49756 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:24,590 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK] 2023-05-29 09:57:24,595 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49766 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', 
datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741841_1022 to mirror 127.0.0.1:36869: java.net.ConnectException: Connection refused 2023-05-29 09:57:24,595 WARN [Thread-640] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741841_1022 2023-05-29 09:57:24,595 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49766 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49766 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:24,595 WARN [Thread-640] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] 2023-05-29 09:57:24,603 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354244581 2023-05-29 09:57:24,603 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK], DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]] 2023-05-29 09:57:24,603 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 is not closed yet, will try archiving it next time 2023-05-29 09:57:26,960 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@2b4c556a] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:45001, datanodeUuid=aeae091d-5f3f-40d0-bf4a-77709011ea35, infoPort=36999, infoSecurePort=0, ipcPort=40167, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741839_1020 to 127.0.0.1:36869 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,586 WARN [Listener at localhost/36623] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:28,588 WARN [ResponseProcessor for block BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023 java.io.IOException: Bad response ERROR for BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023 from datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 09:57:28,589 WARN [DataStreamer for file /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354244581 block BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023] hdfs.DataStreamer(1548): Error Recovery for BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK], DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK]) is bad. 2023-05-29 09:57:28,589 WARN [PacketResponder: BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45001]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,589 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49782 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49782 dst: /127.0.0.1:38121 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,592 INFO [Listener at localhost/36623] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:28,696 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:57294 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:45001:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57294 dst: /127.0.0.1:45001 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,699 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:57:28,699 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid aeae091d-5f3f-40d0-bf4a-77709011ea35) service to localhost/127.0.0.1:37205 2023-05-29 09:57:28,700 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data7/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:28,700 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data8/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:28,705 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,705 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,705 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44395%2C1685354220319:(num 1685354244581) roll requested 2023-05-29 09:57:28,709 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49804 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741843_1025]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741843_1025 to mirror 127.0.0.1:45001: java.net.ConnectException: Connection refused 2023-05-29 09:57:28,709 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741843_1025 2023-05-29 09:57:28,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44395] regionserver.HRegion(9158): Flush requested on 96208f907885521780405e30a7a779f5 2023-05-29 09:57:28,710 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49804 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741843_1025]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49804 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,711 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 96208f907885521780405e30a7a779f5 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:57:28,711 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:28,713 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741844_1026 2023-05-29 09:57:28,713 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:28,715 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741845_1027 2023-05-29 09:57:28,715 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK] 2023-05-29 09:57:28,718 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49806 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741846_1028 to mirror 127.0.0.1:36869: java.net.ConnectException: Connection refused 2023-05-29 09:57:28,718 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741846_1028 2023-05-29 09:57:28,718 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49806 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49806 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,719 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode 
DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] 2023-05-29 09:57:28,719 WARN [IPC Server handler 0 on default port 37205] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 09:57:28,720 WARN [IPC Server handler 0 on default port 37205] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 09:57:28,720 WARN [IPC Server handler 0 on default port 37205] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 09:57:28,721 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49822 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741847_1029 to mirror 127.0.0.1:38747: java.net.ConnectException: Connection refused 2023-05-29 09:57:28,721 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741847_1029 2023-05-29 09:57:28,721 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49822 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49822 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,721 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:28,723 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741849_1031 2023-05-29 09:57:28,724 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:28,724 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354244581 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248705 2023-05-29 09:57:28,724 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,725 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354244581 is not closed yet, will try archiving it next time 2023-05-29 09:57:28,726 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741850_1032 2023-05-29 09:57:28,726 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK] 2023-05-29 09:57:28,728 WARN [Thread-656] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741851_1033 2023-05-29 09:57:28,728 WARN [Thread-656] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] 2023-05-29 09:57:28,729 WARN [IPC Server handler 3 on default port 37205] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 09:57:28,729 WARN [IPC Server handler 3 on default port 37205] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 09:57:28,729 WARN [IPC Server handler 3 on default port 37205] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types 
are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 09:57:28,926 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,926 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,926 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44395%2C1685354220319:(num 1685354248705) roll requested 2023-05-29 09:57:28,929 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741853_1035 2023-05-29 09:57:28,929 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:28,932 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49848 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741854_1036]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741854_1036 to mirror 127.0.0.1:45001: java.net.ConnectException: Connection refused 2023-05-29 09:57:28,932 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741854_1036 2023-05-29 09:57:28,932 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49848 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741854_1036]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49848 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,932 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:28,933 WARN [Thread-665] hdfs.DataStreamer(1658): 
Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741855_1037 2023-05-29 09:57:28,934 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:36869,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK] 2023-05-29 09:57:28,936 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49852 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741856_1038]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741856_1038 to mirror 127.0.0.1:37199: java.net.ConnectException: Connection refused 2023-05-29 09:57:28,936 WARN [Thread-665] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741856_1038 2023-05-29 09:57:28,936 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:49852 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741856_1038]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:49852 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:28,937 WARN [Thread-665] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK] 2023-05-29 09:57:28,937 WARN [IPC Server handler 2 on default port 37205] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-29 09:57:28,937 WARN [IPC Server handler 2 on default port 37205] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-29 09:57:28,937 WARN [IPC Server handler 2 on default port 37205] 
blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-29 09:57:28,945 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248705 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248926 2023-05-29 09:57:28,945 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:28,945 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248705 is not closed yet, will try archiving it next time 2023-05-29 09:57:29,129 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-29 09:57:29,137 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/.tmp/info/9870941cd4b74c7dbac0f81548cdae21 2023-05-29 09:57:29,146 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/.tmp/info/9870941cd4b74c7dbac0f81548cdae21 as hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info/9870941cd4b74c7dbac0f81548cdae21 2023-05-29 09:57:29,153 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info/9870941cd4b74c7dbac0f81548cdae21, entries=5, sequenceid=12, filesize=10.0 K 2023-05-29 09:57:29,154 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for 96208f907885521780405e30a7a779f5 in 443ms, sequenceid=12, compaction requested=false 2023-05-29 09:57:29,154 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:29,335 WARN [Listener at localhost/36623] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:29,337 WARN [Listener at 
localhost/36623] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:29,339 INFO [Listener at localhost/36623] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:29,344 INFO [Listener at localhost/36623] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/java.io.tmpdir/Jetty_localhost_40257_datanode____.crmsw1/webapp 2023-05-29 09:57:29,349 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 to hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/oldWALs/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354232361 2023-05-29 09:57:29,434 INFO [Listener at localhost/36623] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40257 2023-05-29 09:57:29,442 WARN [Listener at localhost/38141] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:29,543 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x625eb09f916ae639: Processing first storage report for DS-45ed997c-9412-40aa-9d81-5dca286cb8c2 from datanode 99ce3b19-f876-40a5-a16a-a974d4c7db06 2023-05-29 09:57:29,544 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x625eb09f916ae639: from storage DS-45ed997c-9412-40aa-9d81-5dca286cb8c2 node DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 7, hasStaleStorage: false, processing time: 2 msecs, invalidatedBlocks: 0 2023-05-29 09:57:29,545 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x625eb09f916ae639: Processing first storage report for DS-7de774b4-ac3a-463c-accc-d6f469853008 from datanode 99ce3b19-f876-40a5-a16a-a974d4c7db06 2023-05-29 09:57:29,545 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x625eb09f916ae639: from storage DS-7de774b4-ac3a-463c-accc-d6f469853008 node DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:30,076 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@4b1612c3] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38121, datanodeUuid=a563df28-2885-4ba9-ab53-1f4e2371348f, infoPort=40823, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741842_1024 to 127.0.0.1:38747 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:30,076 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@38786fdb] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:38121, datanodeUuid=a563df28-2885-4ba9-ab53-1f4e2371348f, infoPort=40823, infoSecurePort=0, ipcPort=36623, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741852_1034 to 127.0.0.1:45001 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:30,486 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:30,487 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C38697%2C1685354220249:(num 1685354220408) roll requested 2023-05-29 09:57:30,491 WARN [Thread-705] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741858_1040 2023-05-29 09:57:30,491 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:30,492 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:30,493 WARN [Thread-705] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK] 2023-05-29 09:57:30,495 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:40316 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741859_1041]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741859_1041 to mirror 127.0.0.1:38747: java.net.ConnectException: Connection refused 2023-05-29 09:57:30,495 WARN [Thread-705] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741859_1041 2023-05-29 09:57:30,495 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:40316 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741859_1041]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:40316 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:30,496 WARN [Thread-705] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:30,497 WARN [Thread-705] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741860_1042 2023-05-29 09:57:30,497 WARN [Thread-705] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:30,503 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-29 09:57:30,503 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354220408 with entries=88, filesize=43.71 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354250487 2023-05-29 09:57:30,503 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33029,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:30,504 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354220408 is not closed yet, will try archiving it next time 2023-05-29 09:57:30,504 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:30,504 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249/jenkins-hbase4.apache.org%2C38697%2C1685354220249.1685354220408; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:36,543 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5b9a99db] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741837_1013 to 127.0.0.1:38747 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:36,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@15c4eed0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741835_1011 to 127.0.0.1:38747 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:37,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3ea5d7c2] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741827_1003 to 127.0.0.1:45001 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:39,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3fe0f1d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, 
storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741826_1002 to 127.0.0.1:38747 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:42,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@1440406d] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741825_1001 to 127.0.0.1:45001 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:42,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@44b8ee0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741836_1012 to 127.0.0.1:38747 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:43,544 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@435e6940] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33029, datanodeUuid=99ce3b19-f876-40a5-a16a-a974d4c7db06, infoPort=34343, infoSecurePort=0, ipcPort=38141, storageInfo=lv=-57;cid=testClusterID;nsid=1437127941;c=1685354219631):Failed to transfer BP-1579005971-172.31.14.131-1685354219631:blk_1073741834_1010 to 127.0.0.1:45001 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at 
java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:48,008 INFO [Listener at localhost/38141] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248926 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354267994 2023-05-29 09:57:48,008 DEBUG [Listener at localhost/38141] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33029,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:48,008 DEBUG [Listener at localhost/38141] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.1685354248926 is not closed yet, will try archiving it next time 2023-05-29 09:57:48,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44395] regionserver.HRegion(9158): Flush requested on 96208f907885521780405e30a7a779f5 2023-05-29 09:57:48,013 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 96208f907885521780405e30a7a779f5 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-29 09:57:48,014 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 2023-05-29 09:57:48,019 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741863_1045 2023-05-29 09:57:48,020 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:48,023 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:58398 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741864_1046]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741864_1046 to mirror 127.0.0.1:38747: java.net.ConnectException: Connection refused 2023-05-29 09:57:48,023 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741864_1046 2023-05-29 09:57:48,024 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1285951927_17 at /127.0.0.1:58398 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741864_1046]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58398 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:48,024 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:48,027 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 09:57:48,027 INFO [Listener at localhost/38141] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 09:57:48,027 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65ab8d8a to 127.0.0.1:58162 2023-05-29 09:57:48,027 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,028 DEBUG [Listener at localhost/38141] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 09:57:48,028 DEBUG [Listener at localhost/38141] util.JVMClusterUtil(257): Found active master hash=1013806567, stopped=false 2023-05-29 09:57:48,028 INFO [Listener at localhost/38141] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:48,030 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:48,030 INFO [Listener at localhost/38141] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 09:57:48,030 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:48,030 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:48,031 DEBUG [Listener at localhost/38141] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f6c4cf2 to 127.0.0.1:58162 2023-05-29 09:57:48,031 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:48,031 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:48,031 DEBUG [Listener at localhost/38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,032 INFO [Listener at localhost/38141] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,44395,1685354220319' ***** 2023-05-29 09:57:48,032 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:48,032 INFO [Listener at localhost/38141] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 09:57:48,032 INFO [Listener at localhost/38141] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37367,1685354221469' ***** 2023-05-29 09:57:48,032 INFO [RS:0;jenkins-hbase4:44395] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 09:57:48,032 INFO [Listener at localhost/38141] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 09:57:48,032 INFO [RS:1;jenkins-hbase4:37367] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 09:57:48,033 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 09:57:48,034 INFO [RS:1;jenkins-hbase4:37367] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 09:57:48,035 INFO [RS:1;jenkins-hbase4:37367] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 09:57:48,035 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:48,035 DEBUG [RS:1;jenkins-hbase4:37367] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02010121 to 127.0.0.1:58162 2023-05-29 09:57:48,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:48,035 DEBUG [RS:1;jenkins-hbase4:37367] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,035 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37367,1685354221469; all regions closed. 2023-05-29 09:57:48,039 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:48,039 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/.tmp/info/7454289f4d914cb1b8c5192e874d8b9b 2023-05-29 09:57:48,040 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,041 ERROR [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
2023-05-29 09:57:48,041 DEBUG [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,041 DEBUG [RS:1;jenkins-hbase4:37367] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,042 INFO [RS:1;jenkins-hbase4:37367] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:57:48,042 INFO [RS:1;jenkins-hbase4:37367] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 09:57:48,042 INFO [RS:1;jenkins-hbase4:37367] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 09:57:48,042 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 09:57:48,042 INFO [RS:1;jenkins-hbase4:37367] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 09:57:48,042 INFO [RS:1;jenkins-hbase4:37367] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 09:57:48,043 INFO [RS:1;jenkins-hbase4:37367] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37367 2023-05-29 09:57:48,047 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:48,047 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:48,047 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:48,047 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37367,1685354221469 2023-05-29 09:57:48,048 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:48,049 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37367,1685354221469] 2023-05-29 09:57:48,049 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37367,1685354221469; numProcessing=1 2023-05-29 09:57:48,052 DEBUG [RegionServerTracker-0] 
zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37367,1685354221469 already deleted, retry=false 2023-05-29 09:57:48,052 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37367,1685354221469 expired; onlineServers=1 2023-05-29 09:57:48,053 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/.tmp/info/7454289f4d914cb1b8c5192e874d8b9b as hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info/7454289f4d914cb1b8c5192e874d8b9b 2023-05-29 09:57:48,059 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info/7454289f4d914cb1b8c5192e874d8b9b, entries=8, sequenceid=25, filesize=13.2 K 2023-05-29 09:57:48,060 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for 96208f907885521780405e30a7a779f5 in 47ms, sequenceid=25, compaction requested=false 2023-05-29 09:57:48,061 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:48,061 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-29 09:57:48,061 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:57:48,061 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/default/TestLogRolling-testLogRollOnDatanodeDeath/96208f907885521780405e30a7a779f5/info/7454289f4d914cb1b8c5192e874d8b9b because midkey is the same as first or last row 2023-05-29 09:57:48,061 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 09:57:48,061 INFO [RS:0;jenkins-hbase4:44395] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 09:57:48,061 INFO [RS:0;jenkins-hbase4:44395] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-05-29 09:57:48,061 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(3303): Received CLOSE for 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:48,061 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(3303): Received CLOSE for 96208f907885521780405e30a7a779f5 2023-05-29 09:57:48,061 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:48,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 992e98307828bc5a28731c7cdf1f58a7, disabling compactions & flushes 2023-05-29 09:57:48,062 DEBUG [RS:0;jenkins-hbase4:44395] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4839c92c to 127.0.0.1:58162 2023-05-29 09:57:48,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:48,062 DEBUG [RS:0;jenkins-hbase4:44395] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:48,062 INFO [RS:0;jenkins-hbase4:44395] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 09:57:48,062 INFO [RS:0;jenkins-hbase4:44395] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 09:57:48,062 INFO [RS:0;jenkins-hbase4:44395] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 09:57:48,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. after waiting 0 ms 2023-05-29 09:57:48,062 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:57:48,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 
2023-05-29 09:57:48,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 992e98307828bc5a28731c7cdf1f58a7 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 09:57:48,063 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 09:57:48,063 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1478): Online Regions={992e98307828bc5a28731c7cdf1f58a7=hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7., 1588230740=hbase:meta,,1.1588230740, 96208f907885521780405e30a7a779f5=TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5.} 2023-05-29 09:57:48,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:57:48,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:57:48,063 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1504): Waiting on 1588230740, 96208f907885521780405e30a7a779f5, 992e98307828bc5a28731c7cdf1f58a7 2023-05-29 09:57:48,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:57:48,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:57:48,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:57:48,063 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.92 KB heapSize=5.45 KB 2023-05-29 09:57:48,064 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,064 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta:.meta(num 1685354220858) roll requested 2023-05-29 09:57:48,064 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:57:48,065 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,44395,1685354220319: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,065 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-29 09:57:48,068 WARN [Thread-744] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741866_1048 2023-05-29 09:57:48,069 WARN [Thread-744] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:48,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-29 09:57:48,070 WARN [Thread-745] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741867_1049 2023-05-29 09:57:48,070 WARN [Thread-744] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741868_1050 2023-05-29 09:57:48,070 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-29 09:57:48,070 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-29 09:57:48,071 WARN [Thread-745] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:48,071 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-29 09:57:48,071 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1137704960, "init": 513802240, "max": 2051014656, "used": 670076296 }, "NonHeapMemoryUsage": { "committed": 134045696, "init": 2555904, "max": -1, "used": 131644096 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-29 09:57:48,071 WARN [Thread-744] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:48,078 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38697] master.MasterRpcServices(609): jenkins-hbase4.apache.org,44395,1685354220319 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,44395,1685354220319: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,084 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-29 09:57:48,084 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354220858.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354268064.meta 2023-05-29 09:57:48,085 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33029,DS-45ed997c-9412-40aa-9d81-5dca286cb8c2,DISK], DatanodeInfoWithStorage[127.0.0.1:38121,DS-afc682ef-58b5-4780-b72a-93bdfa947bcb,DISK]] 2023-05-29 09:57:48,085 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354220858.meta is not closed yet, will try archiving it next time 2023-05-29 09:57:48,085 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,085 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319/jenkins-hbase4.apache.org%2C44395%2C1685354220319.meta.1685354220858.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:37199,DS-0a05775c-9b96-454e-b909-50ff6f0a6a71,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:57:48,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/.tmp/info/02ee2e91708c4edca96bb0b210050f8e 2023-05-29 09:57:48,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/.tmp/info/02ee2e91708c4edca96bb0b210050f8e as hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/info/02ee2e91708c4edca96bb0b210050f8e 2023-05-29 09:57:48,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/info/02ee2e91708c4edca96bb0b210050f8e, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 09:57:48,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 992e98307828bc5a28731c7cdf1f58a7 in 37ms, sequenceid=6, compaction requested=false 2023-05-29 09:57:48,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/data/hbase/namespace/992e98307828bc5a28731c7cdf1f58a7/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 09:57:48,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 992e98307828bc5a28731c7cdf1f58a7: 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685354220914.992e98307828bc5a28731c7cdf1f58a7. 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 96208f907885521780405e30a7a779f5, disabling compactions & flushes 2023-05-29 09:57:48,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 
after waiting 0 ms 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:48,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,263 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:57:48,264 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(3303): Received CLOSE for 96208f907885521780405e30a7a779f5 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:57:48,264 DEBUG [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1504): Waiting on 1588230740, 96208f907885521780405e30a7a779f5 2023-05-29 09:57:48,264 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 96208f907885521780405e30a7a779f5, disabling compactions & flushes 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:57:48,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. after waiting 0 ms 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 96208f907885521780405e30a7a779f5: 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-29 09:57:48,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1685354221562.96208f907885521780405e30a7a779f5. 2023-05-29 09:57:48,330 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,330 INFO [RS:1;jenkins-hbase4:37367] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37367,1685354221469; zookeeper connection closed. 2023-05-29 09:57:48,330 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:37367-0x100765f0db70005, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,331 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3d0f8486] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3d0f8486 2023-05-29 09:57:48,464 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-29 09:57:48,464 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44395,1685354220319; all regions closed. 2023-05-29 09:57:48,465 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:48,469 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/WALs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:48,473 DEBUG [RS:0;jenkins-hbase4:44395] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,473 INFO [RS:0;jenkins-hbase4:44395] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:57:48,473 INFO [RS:0;jenkins-hbase4:44395] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 09:57:48,473 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 09:57:48,474 INFO [RS:0;jenkins-hbase4:44395] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44395 2023-05-29 09:57:48,476 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44395,1685354220319 2023-05-29 09:57:48,476 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:48,476 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44395,1685354220319] 2023-05-29 09:57:48,477 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44395,1685354220319; numProcessing=2 2023-05-29 09:57:48,479 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44395,1685354220319 already deleted, retry=false 2023-05-29 09:57:48,479 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44395,1685354220319 expired; onlineServers=0 2023-05-29 09:57:48,479 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,38697,1685354220249' ***** 2023-05-29 09:57:48,479 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 09:57:48,479 DEBUG [M:0;jenkins-hbase4:38697] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3852e6b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:57:48,479 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:48,479 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38697,1685354220249; all regions closed. 2023-05-29 09:57:48,480 DEBUG [M:0;jenkins-hbase4:38697] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:57:48,480 DEBUG [M:0;jenkins-hbase4:38697] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 09:57:48,480 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 09:57:48,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354220487] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354220487,5,FailOnTimeoutGroup] 2023-05-29 09:57:48,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354220487] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354220487,5,FailOnTimeoutGroup] 2023-05-29 09:57:48,480 DEBUG [M:0;jenkins-hbase4:38697] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 09:57:48,481 INFO [M:0;jenkins-hbase4:38697] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-05-29 09:57:48,481 INFO [M:0;jenkins-hbase4:38697] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 09:57:48,481 INFO [M:0;jenkins-hbase4:38697] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 09:57:48,481 DEBUG [M:0;jenkins-hbase4:38697] master.HMaster(1512): Stopping service threads 2023-05-29 09:57:48,481 INFO [M:0;jenkins-hbase4:38697] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 09:57:48,482 ERROR [M:0;jenkins-hbase4:38697] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 09:57:48,482 INFO [M:0;jenkins-hbase4:38697] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 09:57:48,482 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-29 09:57:48,483 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 09:57:48,483 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:48,483 DEBUG [M:0;jenkins-hbase4:38697] zookeeper.ZKUtil(398): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 09:57:48,483 WARN [M:0;jenkins-hbase4:38697] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 09:57:48,483 INFO [M:0;jenkins-hbase4:38697] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 09:57:48,483 INFO [M:0;jenkins-hbase4:38697] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 09:57:48,483 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:57:48,484 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:57:48,484 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:48,484 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:48,484 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:57:48,484 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 09:57:48,484 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.08 KB heapSize=45.73 KB 2023-05-29 09:57:48,492 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:33226 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741871_1053]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data4/current]'}, localName='127.0.0.1:33029', datanodeUuid='99ce3b19-f876-40a5-a16a-a974d4c7db06', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741871_1053 to mirror 127.0.0.1:38747: java.net.ConnectException: Connection refused 2023-05-29 09:57:48,492 WARN [Thread-759] hdfs.DataStreamer(1658): Abandoning BP-1579005971-172.31.14.131-1685354219631:blk_1073741871_1053 2023-05-29 09:57:48,492 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:33226 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741871_1053]] datanode.DataXceiver(323): 127.0.0.1:33029:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33226 dst: /127.0.0.1:33029 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:48,493 WARN [Thread-759] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:38747,DS-632648df-7751-44b5-b10a-b5b0d522a3be,DISK] 2023-05-29 09:57:48,495 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:58452 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741872_1054]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current]'}, localName='127.0.0.1:38121', datanodeUuid='a563df28-2885-4ba9-ab53-1f4e2371348f', xmitsInProgress=0}:Exception transfering block BP-1579005971-172.31.14.131-1685354219631:blk_1073741872_1054 to mirror 127.0.0.1:45001: java.net.ConnectException: Connection refused 2023-05-29 09:57:48,495 WARN [Thread-759] hdfs.DataStreamer(1658): Abandoning 
BP-1579005971-172.31.14.131-1685354219631:blk_1073741872_1054 2023-05-29 09:57:48,496 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1977742225_17 at /127.0.0.1:58452 [Receiving block BP-1579005971-172.31.14.131-1685354219631:blk_1073741872_1054]] datanode.DataXceiver(323): 127.0.0.1:38121:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58452 dst: /127.0.0.1:38121 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:57:48,496 WARN [Thread-759] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:45001,DS-e2be6146-16f0-4920-a1cf-8167c42f40da,DISK] 2023-05-29 09:57:48,502 INFO [M:0;jenkins-hbase4:38697] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.08 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/42e31f204dfc44239e3a2971656e5aeb 2023-05-29 09:57:48,507 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/42e31f204dfc44239e3a2971656e5aeb as hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/42e31f204dfc44239e3a2971656e5aeb 2023-05-29 09:57:48,513 INFO [M:0;jenkins-hbase4:38697] regionserver.HStore(1080): Added hdfs://localhost:37205/user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/42e31f204dfc44239e3a2971656e5aeb, entries=11, sequenceid=92, filesize=7.0 K 2023-05-29 09:57:48,514 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegion(2948): Finished flush of dataSize ~38.08 KB/38997, heapSize ~45.72 KB/46816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=92, compaction requested=false 2023-05-29 09:57:48,515 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:48,515 DEBUG [M:0;jenkins-hbase4:38697] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:57:48,515 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/0e3e07de-a595-c5bf-4c1a-46c4c43556a8/MasterData/WALs/jenkins-hbase4.apache.org,38697,1685354220249 2023-05-29 09:57:48,518 INFO [M:0;jenkins-hbase4:38697] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 09:57:48,518 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 09:57:48,519 INFO [M:0;jenkins-hbase4:38697] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38697 2023-05-29 09:57:48,521 DEBUG [M:0;jenkins-hbase4:38697] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38697,1685354220249 already deleted, retry=false 2023-05-29 09:57:48,589 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:57:48,631 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,631 INFO [M:0;jenkins-hbase4:38697] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38697,1685354220249; zookeeper connection closed. 2023-05-29 09:57:48,631 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): master:38697-0x100765f0db70000, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,731 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,731 INFO [RS:0;jenkins-hbase4:44395] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44395,1685354220319; zookeeper connection closed. 2023-05-29 09:57:48,731 DEBUG [Listener at localhost/42131-EventThread] zookeeper.ZKWatcher(600): regionserver:44395-0x100765f0db70001, quorum=127.0.0.1:58162, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:57:48,732 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@133f4322] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@133f4322 2023-05-29 09:57:48,732 INFO [Listener at localhost/38141] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-29 09:57:48,732 WARN [Listener at localhost/38141] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:48,736 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:48,840 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:57:48,840 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid 99ce3b19-f876-40a5-a16a-a974d4c7db06) service to localhost/127.0.0.1:37205 2023-05-29 09:57:48,841 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data3/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:48,842 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data4/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:48,844 WARN [Listener at localhost/38141] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:57:48,847 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:48,950 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:57:48,950 WARN [BP-1579005971-172.31.14.131-1685354219631 heartbeating to localhost/127.0.0.1:37205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579005971-172.31.14.131-1685354219631 (Datanode Uuid a563df28-2885-4ba9-ab53-1f4e2371348f) service to localhost/127.0.0.1:37205 2023-05-29 09:57:48,951 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data9/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:48,951 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/cluster_a9bb8a38-7acc-71fc-93f8-b4cc372b1653/dfs/data/data10/current/BP-1579005971-172.31.14.131-1685354219631] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:57:48,962 INFO [Listener at localhost/38141] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:57:49,079 INFO [Listener at localhost/38141] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 09:57:49,125 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 09:57:49,138 INFO [Listener at localhost/38141] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=75 (was 52) Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37205 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost:37205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38141 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37205 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:37205 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost:37205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=460 (was 436) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=71 (was 94), ProcessCount=167 (was 168), AvailableMemoryMB=3401 (was 3842) 2023-05-29 09:57:49,151 INFO [Listener at localhost/38141] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=75, OpenFileDescriptor=460, MaxFileDescriptor=60000, SystemLoadAverage=71, ProcessCount=166, AvailableMemoryMB=3400 2023-05-29 09:57:49,151 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/hadoop.log.dir so I do NOT create it in target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/aced4540-2d03-8fb3-f693-c7538693e134/hadoop.tmp.dir so I do NOT create it in target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0, deleteOnExit=true 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/test.cache.data in system properties and HBase conf 2023-05-29 09:57:49,152 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 09:57:49,153 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/hadoop.log.dir in system properties and HBase conf 2023-05-29 09:57:49,153 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 09:57:49,153 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 09:57:49,153 INFO [Listener at localhost/38141] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 09:57:49,153 DEBUG [Listener at localhost/38141] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 09:57:49,154 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:57:49,154 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:57:49,154 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 09:57:49,154 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:57:49,154 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 09:57:49,155 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 09:57:49,155 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:57:49,155 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:57:49,155 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 09:57:49,155 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/nfs.dump.dir in system properties and HBase conf 2023-05-29 09:57:49,156 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir in system properties and HBase conf 2023-05-29 09:57:49,156 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:57:49,156 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 09:57:49,156 INFO [Listener at localhost/38141] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 09:57:49,159 WARN [Listener at localhost/38141] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 09:57:49,191 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:57:49,192 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:57:49,240 WARN [Listener at localhost/38141] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:49,242 INFO [Listener at localhost/38141] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:49,246 INFO [Listener at localhost/38141] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_45869_hdfs____q651tw/webapp 2023-05-29 09:57:49,337 INFO [Listener at localhost/38141] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45869 2023-05-29 09:57:49,339 WARN [Listener at localhost/38141] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
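The bring-up traced above — the StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1} line, the per-test system properties, DFS formatting with testClusterID, and the NameNode web UI coming up — is what HBaseTestingUtility performs when a test asks for a mini cluster. A minimal sketch of how a test typically requests that topology; the builder fields mirror the names printed in the log, but the exact signatures should be checked against the HBase 2.4 test jar actually on the classpath:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirror the topology printed in the log: 1 master, 1 region server,
    // 2 HDFS datanodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZK, master and RS, as logged here
    try {
      // ... exercise the cluster via util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();    // tears the whole mini cluster down again
    }
  }
}
```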
2023-05-29 09:57:49,342 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:57:49,342 WARN [Listener at localhost/38141] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:57:49,378 WARN [Listener at localhost/44117] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:49,388 WARN [Listener at localhost/44117] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:49,391 WARN [Listener at localhost/44117] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:49,392 INFO [Listener at localhost/44117] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:49,397 INFO [Listener at localhost/44117] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_45691_datanode____.um6tik/webapp 2023-05-29 09:57:49,489 INFO [Listener at localhost/44117] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45691 2023-05-29 09:57:49,496 WARN [Listener at localhost/44821] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:49,512 WARN [Listener at localhost/44821] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:57:49,515 WARN [Listener at localhost/44821] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:57:49,517 INFO [Listener at localhost/44821] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:57:49,521 INFO [Listener at localhost/44821] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_40073_datanode____b33y0n/webapp 2023-05-29 09:57:49,534 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:57:49,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f3d4015ec05b984: Processing first storage report for DS-11483388-1616-4a42-8c92-d4a7c05684e0 from datanode 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:57:49,597 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f3d4015ec05b984: from storage DS-11483388-1616-4a42-8c92-d4a7c05684e0 node DatanodeRegistration(127.0.0.1:33851, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=42703, infoSecurePort=0, ipcPort=44821, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:49,597 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2f3d4015ec05b984: Processing first storage report for DS-d31b9586-f635-4a6f-bf14-0511700fb98a from datanode 
0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:57:49,597 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2f3d4015ec05b984: from storage DS-d31b9586-f635-4a6f-bf14-0511700fb98a node DatanodeRegistration(127.0.0.1:33851, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=42703, infoSecurePort=0, ipcPort=44821, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 09:57:49,624 INFO [Listener at localhost/44821] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40073 2023-05-29 09:57:49,630 WARN [Listener at localhost/38063] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:57:49,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd8ad7ad8ad123df0: Processing first storage report for DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 09:57:49,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd8ad7ad8ad123df0: from storage DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 node DatanodeRegistration(127.0.0.1:36191, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=41421, infoSecurePort=0, ipcPort=38063, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:49,714 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd8ad7ad8ad123df0: Processing first storage report for DS-581e91c0-2dfc-4d53-a925-783c14940cc4 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 09:57:49,714 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd8ad7ad8ad123df0: from storage DS-581e91c0-2dfc-4d53-a925-783c14940cc4 node DatanodeRegistration(127.0.0.1:36191, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=41421, infoSecurePort=0, ipcPort=38063, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:57:49,738 DEBUG [Listener at localhost/38063] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f 2023-05-29 09:57:49,740 INFO [Listener at localhost/38063] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/zookeeper_0, clientPort=50576, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 09:57:49,741 INFO [Listener at localhost/38063] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=50576 2023-05-29 09:57:49,742 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,742 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,757 INFO [Listener at localhost/38063] util.FSUtils(471): Created version file at hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d with version=8 2023-05-29 09:57:49,757 INFO [Listener at localhost/38063] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 09:57:49,758 INFO [Listener at localhost/38063] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:57:49,759 INFO [Listener at localhost/38063] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:57:49,760 INFO [Listener at localhost/38063] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35407 2023-05-29 09:57:49,761 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,761 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,762 INFO [Listener at localhost/38063] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35407 connecting to ZooKeeper ensemble=127.0.0.1:50576 2023-05-29 09:57:49,770 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:354070x0, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:57:49,771 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKWatcher(625): master:35407-0x100765fcf2f0000 connected 2023-05-29 09:57:49,793 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:57:49,794 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:49,794 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:57:49,795 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35407 2023-05-29 09:57:49,796 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35407 2023-05-29 09:57:49,797 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35407 2023-05-29 09:57:49,799 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35407 2023-05-29 09:57:49,802 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35407 2023-05-29 09:57:49,803 INFO [Listener at localhost/38063] master.HMaster(444): hbase.rootdir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d, hbase.cluster.distributed=false 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:57:49,816 INFO [Listener at localhost/38063] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:57:49,817 INFO [Listener at localhost/38063] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43005 2023-05-29 09:57:49,818 INFO [Listener at localhost/38063] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:57:49,819 DEBUG [Listener at 
localhost/38063] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 09:57:49,819 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,820 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,821 INFO [Listener at localhost/38063] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43005 connecting to ZooKeeper ensemble=127.0.0.1:50576 2023-05-29 09:57:49,824 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:430050x0, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:57:49,825 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43005-0x100765fcf2f0001 connected 2023-05-29 09:57:49,825 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:57:49,825 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:57:49,826 DEBUG [Listener at localhost/38063] zookeeper.ZKUtil(164): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:57:49,826 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43005 2023-05-29 09:57:49,826 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43005 2023-05-29 09:57:49,827 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43005 2023-05-29 09:57:49,827 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43005 2023-05-29 09:57:49,827 DEBUG [Listener at localhost/38063] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43005 2023-05-29 09:57:49,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,831 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:57:49,831 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,832 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:57:49,832 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:57:49,832 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:57:49,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35407,1685354269758 from backup master directory 2023-05-29 09:57:49,833 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:57:49,835 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,835 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-29 09:57:49,835 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:57:49,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/hbase.id with ID: d959a150-13e3-4911-8e46-6adad1401bf7 2023-05-29 09:57:49,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:49,863 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6eb34b8c to 127.0.0.1:50576 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:49,874 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6583d3aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind 
address=null 2023-05-29 09:57:49,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:49,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 09:57:49,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:49,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store-tmp 2023-05-29 09:57:49,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:49,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:57:49,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:49,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:49,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:57:49,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:57:49,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
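The 'master:store' descriptor printed above ({NAME => 'proc', BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false', ...}) is the kind of structure the 2.x descriptor builders produce. A rough sketch under the assumption that the standard TableDescriptorBuilder/ColumnFamilyDescriptorBuilder API applies; the table name "example:store" is made up for illustration and the setter names should be verified against the client library in use:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static TableDescriptor procStoreLike() {
    // Family settings mirroring the values printed in the log line above.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();
    // "example:store" is a hypothetical table name for this sketch only.
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example", "store"))
        .setColumnFamily(proc)
        .build();
  }
}
```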
2023-05-29 09:57:49,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:57:49,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35407%2C1685354269758, suffix=, logDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758, archiveDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/oldWALs, maxLogs=10 2023-05-29 09:57:49,900 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354269891 2023-05-29 09:57:49,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] 2023-05-29 09:57:49,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:49,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:49,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,908 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,909 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 09:57:49,909 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 09:57:49,910 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:49,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:57:49,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:49,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=804640, jitterRate=0.023152977228164673}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:49,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:57:49,920 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 09:57:49,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 09:57:49,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-29 09:57:49,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
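The WAL configuration a little earlier (blocksize=256 MB, rollsize=128 MB) shows the relationship TestLogRolling exercises: the roll size is the WAL block size scaled by hbase.regionserver.logroll.multiplier (0.5 by default), and crossing it triggers a log roll. A back-of-the-envelope sketch with the standard configuration keys; the defaults written here are assumptions for the sketch, the authoritative values come from the running cluster's configuration:

```java
import org.apache.hadoop.conf.Configuration;

public class WalRollSizeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 256 MB block size and a 0.5 multiplier reproduce the figures in the log.
    long blockSize = conf.getLong("hbase.regionserver.hlog.blocksize",
        256L * 1024 * 1024);
    float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    long rollSize = (long) (blockSize * multiplier);
    System.out.printf("blocksize=%d bytes, rollsize=%d bytes%n", blockSize, rollSize);
    // Prints rollsize=134217728 (128 MB), matching "rollsize=128 MB" above.
  }
}
```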
2023-05-29 09:57:49,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 09:57:49,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 09:57:49,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 09:57:49,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 09:57:49,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 09:57:49,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 09:57:49,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-29 09:57:49,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 09:57:49,936 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 09:57:49,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 09:57:49,938 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,938 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 09:57:49,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 09:57:49,940 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 09:57:49,941 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:49,941 DEBUG [Listener at 
localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:57:49,941 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,941 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35407,1685354269758, sessionid=0x100765fcf2f0000, setting cluster-up flag (Was=false) 2023-05-29 09:57:49,946 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 09:57:49,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,955 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:49,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 09:57:49,962 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:49,963 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.hbase-snapshot/.tmp 2023-05-29 09:57:49,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, 
corePoolSize=10, maxPoolSize=10 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:57:49,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:49,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354299971 2023-05-29 09:57:49,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 09:57:49,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 09:57:49,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 09:57:49,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 09:57:49,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 09:57:49,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 09:57:49,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
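The repeated "Set watcher on znode that does not yet exist" lines just above (/hbase/balancer, /hbase/normalizer, /hbase/switch/split, /hbase/switch/merge, /hbase/snapshot-cleanup) rely on ZooKeeper existence watches: an exists() call registers a watch whether or not the node is present, so the master is notified the moment it appears. A stripped-down sketch using the plain ZooKeeper client rather than HBase's ZKUtil wrapper; the connect string and timeout are placeholders:

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ExistenceWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // The default watcher receives both connection-state and node events.
    Watcher watcher = (WatchedEvent event) -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
      if (event.getType() == Watcher.Event.EventType.NodeCreated) {
        System.out.println("znode appeared: " + event.getPath());
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, watcher);
    connected.await();
    // exists() registers the watch even when the node is absent, which is
    // exactly what the "does not yet exist" log lines describe.
    if (zk.exists("/hbase/balancer", true) == null) {
      System.out.println("no /hbase/balancer yet; existence watch registered");
    }
    // A long-lived client would keep the session open to receive NodeCreated;
    // this sketch simply closes it again.
    zk.close();
  }
}
```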
2023-05-29 09:57:49,973 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:49,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 09:57:49,973 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 09:57:49,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 09:57:49,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 09:57:49,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 09:57:49,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 09:57:49,974 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354269973,5,FailOnTimeoutGroup] 2023-05-29 09:57:49,974 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354269974,5,FailOnTimeoutGroup] 2023-05-29 09:57:49,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:49,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 09:57:49,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:49,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
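The chores registered above (LogsCleaner every 600000 ms, HFileCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms, SnapshotCleaner every 1800000 ms) are periodic background tasks run by the master's chore service. The scheduling pattern itself is ordinary fixed-delay execution; the following is a plain-Java illustration of that idea, not HBase's ChoreService API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreSketch {
  public static void main(String[] args) {
    ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);
    // Stand-in for one cleaner pass; the real cleaners walk the oldWALs and
    // archive directories and delete entries whose TTL has expired (roughly).
    Runnable logsCleaner =
        () -> System.out.println("cleaner pass at " + System.currentTimeMillis());
    // period=600000, unit=MILLISECONDS, as in the LogsCleaner line above.
    chorePool.scheduleWithFixedDelay(logsCleaner, 600_000, 600_000, TimeUnit.MILLISECONDS);
    // A real service would also shut the pool down on cluster stop.
  }
}
```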
2023-05-29 09:57:49,975 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:49,988 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:49,988 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:49,988 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d 2023-05-29 09:57:49,997 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:49,998 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:57:49,999 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/info 2023-05-29 09:57:50,000 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:57:50,000 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,000 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:57:50,002 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:50,002 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:57:50,003 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,003 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:57:50,004 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/table 2023-05-29 09:57:50,004 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:57:50,005 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,006 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740 2023-05-29 09:57:50,006 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740 2023-05-29 09:57:50,009 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:57:50,012 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:57:50,016 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:50,016 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=872086, jitterRate=0.1089150458574295}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:57:50,016 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:57:50,016 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:57:50,017 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:57:50,017 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:57:50,017 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:57:50,017 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:57:50,017 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:57:50,017 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:57:50,018 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:57:50,018 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 09:57:50,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 09:57:50,020 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 09:57:50,021 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 09:57:50,029 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(951): ClusterId : d959a150-13e3-4911-8e46-6adad1401bf7 2023-05-29 09:57:50,030 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:57:50,032 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:57:50,032 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:57:50,034 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:57:50,036 DEBUG [RS:0;jenkins-hbase4:43005] zookeeper.ReadOnlyZKClient(139): Connect 0x0de1a4d6 to 127.0.0.1:50576 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:50,040 DEBUG [RS:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9de31b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:50,040 DEBUG [RS:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4cdcaeed, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:57:50,052 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43005 2023-05-29 09:57:50,052 INFO [RS:0;jenkins-hbase4:43005] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:57:50,052 INFO [RS:0;jenkins-hbase4:43005] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:57:50,052 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1022): About to register with Master. 
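
The CompactionConfiguration(173) records earlier in this section (minCompactSize:128 MB, minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000, jitter 0.5) are derived from a small set of standard configuration keys. The sketch below only restates those logged values explicitly for illustration; it is not part of the test, and the class name is invented.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: the hbase-site.xml keys behind the CompactionConfiguration(173) log lines.
    // The values restate what the log prints; setting them explicitly is illustrative only.
    public class CompactionTuning {
      public static Configuration compactionDefaults() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);       // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                              // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                             // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);                       // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);               // off-peak ratio
        conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);                  // major period (7 days in ms)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);                // major jitter
        return conf;
      }
    }
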
2023-05-29 09:57:50,053 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,35407,1685354269758 with isa=jenkins-hbase4.apache.org/172.31.14.131:43005, startcode=1685354269815 2023-05-29 09:57:50,053 DEBUG [RS:0;jenkins-hbase4:43005] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:57:50,056 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42883, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:57:50,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,057 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d 2023-05-29 09:57:50,057 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44117 2023-05-29 09:57:50,057 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:57:50,060 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:57:50,060 DEBUG [RS:0;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,060 WARN [RS:0;jenkins-hbase4:43005] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 09:57:50,060 INFO [RS:0;jenkins-hbase4:43005] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:50,060 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,061 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43005,1685354269815] 2023-05-29 09:57:50,064 DEBUG [RS:0;jenkins-hbase4:43005] zookeeper.ZKUtil(162): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,065 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:57:50,065 INFO [RS:0;jenkins-hbase4:43005] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:57:50,067 INFO [RS:0;jenkins-hbase4:43005] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:57:50,068 INFO [RS:0;jenkins-hbase4:43005] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:57:50,068 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,068 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:57:50,069 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
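
The FSHLogProvider instantiated above, together with the roll parameters reported a little further down (blocksize=256 MB, rollsize=128 MB, maxLogs=32), is governed by a few well-known keys. A minimal sketch with values matching what this run logs; the class name is invented and the explicit settings are for illustration only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: keys that select the WAL provider and decide when a WAL file rolls.
    public class WalRollTuning {
      public static Configuration walDefaults() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "filesystem");                          // FSHLogProvider, as logged above
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size (256 MB)
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5F);          // roll at blocksize * 0.5 = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // pressure flushes once 32 WALs pile up
        return conf;
      }
    }
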
2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,070 DEBUG [RS:0;jenkins-hbase4:43005] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:57:50,071 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,071 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,071 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,082 INFO [RS:0;jenkins-hbase4:43005] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:57:50,082 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43005,1685354269815-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 09:57:50,092 INFO [RS:0;jenkins-hbase4:43005] regionserver.Replication(203): jenkins-hbase4.apache.org,43005,1685354269815 started 2023-05-29 09:57:50,092 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43005,1685354269815, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43005, sessionid=0x100765fcf2f0001 2023-05-29 09:57:50,092 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:57:50,092 DEBUG [RS:0;jenkins-hbase4:43005] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,092 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43005,1685354269815' 2023-05-29 09:57:50,092 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43005,1685354269815' 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:57:50,093 DEBUG [RS:0;jenkins-hbase4:43005] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:57:50,094 DEBUG [RS:0;jenkins-hbase4:43005] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:57:50,094 INFO [RS:0;jenkins-hbase4:43005] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:57:50,094 INFO [RS:0;jenkins-hbase4:43005] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
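
The flush-table-proc and online-snapshot procedure members started above are the region-server side of the coordinated flush and snapshot operations a client can request through Admin. A minimal sketch of those client entry points, assuming an already open Connection; the connection variable, table name, snapshot name, and class name here are all placeholders, not the test's code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Sketch: client-side calls that exercise the flush-table-proc and
    // online-snapshot procedure members the region server just registered.
    public class FlushAndSnapshot {
      public static void flushThenSnapshot(Connection connection, TableName table) throws Exception {
        try (Admin admin = connection.getAdmin()) {
          admin.flush(table);                        // table flush, coordinated across region servers
          admin.snapshot("example-snapshot", table); // online snapshot of the same table
        }
      }
    }
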
2023-05-29 09:57:50,172 DEBUG [jenkins-hbase4:35407] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 09:57:50,173 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43005,1685354269815, state=OPENING 2023-05-29 09:57:50,174 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 09:57:50,175 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:50,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43005,1685354269815}] 2023-05-29 09:57:50,176 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:57:50,196 INFO [RS:0;jenkins-hbase4:43005] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43005%2C1685354269815, suffix=, logDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815, archiveDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/oldWALs, maxLogs=32 2023-05-29 09:57:50,206 INFO [RS:0;jenkins-hbase4:43005] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 2023-05-29 09:57:50,206 DEBUG [RS:0;jenkins-hbase4:43005] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] 2023-05-29 09:57:50,331 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,331 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 09:57:50,334 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59150, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 09:57:50,338 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 09:57:50,338 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:57:50,339 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43005%2C1685354269815.meta, suffix=.meta, logDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815, archiveDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/oldWALs, maxLogs=32 2023-05-29 09:57:50,350 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.meta.1685354270341.meta 2023-05-29 09:57:50,350 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] 2023-05-29 09:57:50,350 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:50,350 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 09:57:50,350 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 09:57:50,350 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 09:57:50,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 09:57:50,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:50,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 09:57:50,351 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 09:57:50,352 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:57:50,353 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/info 2023-05-29 09:57:50,354 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/info 2023-05-29 09:57:50,354 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:57:50,355 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,355 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:57:50,356 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:50,356 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:57:50,356 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:57:50,357 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,357 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:57:50,358 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/table 2023-05-29 09:57:50,358 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740/table 2023-05-29 09:57:50,358 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:57:50,359 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,360 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740 2023-05-29 09:57:50,361 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/meta/1588230740 2023-05-29 09:57:50,363 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:57:50,364 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:57:50,365 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=746145, jitterRate=-0.051227867603302}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:57:50,365 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:57:50,366 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354270331 2023-05-29 09:57:50,370 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 09:57:50,370 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 09:57:50,371 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43005,1685354269815, state=OPEN 2023-05-29 09:57:50,373 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 09:57:50,373 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:57:50,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 09:57:50,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43005,1685354269815 in 197 msec 2023-05-29 09:57:50,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 09:57:50,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 357 msec 2023-05-29 09:57:50,379 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 414 msec 2023-05-29 09:57:50,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354270379, completionTime=-1 2023-05-29 09:57:50,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 09:57:50,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 09:57:50,382 DEBUG [hconnection-0x7f6be210-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:57:50,384 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59154, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:57:50,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 09:57:50,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354330385 2023-05-29 09:57:50,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354390385 2023-05-29 09:57:50,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1685354269758-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1685354269758-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1685354269758-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35407, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 09:57:50,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
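
Master initialization completes shortly after this point ("Master has completed initialization" appears further down), so a test normally blocks until the active master is ready before issuing any DDL. A sketch of that wait, assuming the HBaseTestingUtility instance driving this run is available as util; the helper class name is invented.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    // Sketch: block until the mini-cluster's master is active and fully initialized
    // before running DDL such as the namespace-table create that follows.
    public class WaitForMaster {
      public static void awaitMaster(HBaseTestingUtility util) throws Exception {
        MiniHBaseCluster cluster = util.getHBaseCluster();
        cluster.waitForActiveAndReadyMaster(); // returns once an HMaster reports itself initialized
      }
    }
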
2023-05-29 09:57:50,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:50,398 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 09:57:50,399 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 09:57:50,400 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:57:50,401 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:57:50,403 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,403 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414 empty. 2023-05-29 09:57:50,404 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,404 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 09:57:50,415 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:50,416 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b319e348cde1fd1d3c095283a666d414, NAME => 'hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp 2023-05-29 09:57:50,423 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:50,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b319e348cde1fd1d3c095283a666d414, disabling compactions & flushes 2023-05-29 09:57:50,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 
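
The create above spells out the full hbase:namespace descriptor in shell syntax ({NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}). For reading along, the same family definition expressed with the 2.x builder API looks roughly like the sketch below; the master builds this descriptor internally, so the class here is purely illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: a client-side TableDescriptor mirroring the hbase:namespace descriptor
    // printed in the log (the master constructs its own copy internally).
    public class NamespaceTableDescriptor {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setInMemory(true)                 // IN_MEMORY => 'true'
            .setMaxVersions(10)                // VERSIONS => '10'
            .setBlocksize(8192)                // BLOCKSIZE => '8192'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(info)
            .build();
      }
    }
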
2023-05-29 09:57:50,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:57:50,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. after waiting 0 ms 2023-05-29 09:57:50,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:57:50,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:57:50,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b319e348cde1fd1d3c095283a666d414: 2023-05-29 09:57:50,426 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:57:50,427 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354270427"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354270427"}]},"ts":"1685354270427"} 2023-05-29 09:57:50,429 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:57:50,430 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:57:50,430 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354270430"}]},"ts":"1685354270430"} 2023-05-29 09:57:50,432 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 09:57:50,439 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b319e348cde1fd1d3c095283a666d414, ASSIGN}] 2023-05-29 09:57:50,441 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b319e348cde1fd1d3c095283a666d414, ASSIGN 2023-05-29 09:57:50,442 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b319e348cde1fd1d3c095283a666d414, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43005,1685354269815; forceNewPlan=false, retain=false 2023-05-29 09:57:50,593 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b319e348cde1fd1d3c095283a666d414, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,594 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354270593"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354270593"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354270593"}]},"ts":"1685354270593"} 2023-05-29 09:57:50,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure b319e348cde1fd1d3c095283a666d414, server=jenkins-hbase4.apache.org,43005,1685354269815}] 2023-05-29 09:57:50,754 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:57:50,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b319e348cde1fd1d3c095283a666d414, NAME => 'hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:50,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:50,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,756 INFO [StoreOpener-b319e348cde1fd1d3c095283a666d414-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,757 DEBUG [StoreOpener-b319e348cde1fd1d3c095283a666d414-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414/info 2023-05-29 09:57:50,757 DEBUG [StoreOpener-b319e348cde1fd1d3c095283a666d414-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414/info 2023-05-29 09:57:50,757 INFO [StoreOpener-b319e348cde1fd1d3c095283a666d414-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b319e348cde1fd1d3c095283a666d414 columnFamilyName info 2023-05-29 09:57:50,758 INFO [StoreOpener-b319e348cde1fd1d3c095283a666d414-1] regionserver.HStore(310): Store=b319e348cde1fd1d3c095283a666d414/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:50,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,761 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b319e348cde1fd1d3c095283a666d414 2023-05-29 09:57:50,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/hbase/namespace/b319e348cde1fd1d3c095283a666d414/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:50,764 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b319e348cde1fd1d3c095283a666d414; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=721198, jitterRate=-0.08295010030269623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:50,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b319e348cde1fd1d3c095283a666d414: 2023-05-29 09:57:50,766 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414., pid=6, masterSystemTime=1685354270750 2023-05-29 09:57:50,768 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:57:50,768 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 
2023-05-29 09:57:50,769 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b319e348cde1fd1d3c095283a666d414, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:50,769 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354270769"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354270769"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354270769"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354270769"}]},"ts":"1685354270769"} 2023-05-29 09:57:50,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 09:57:50,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure b319e348cde1fd1d3c095283a666d414, server=jenkins-hbase4.apache.org,43005,1685354269815 in 173 msec 2023-05-29 09:57:50,776 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 09:57:50,776 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b319e348cde1fd1d3c095283a666d414, ASSIGN in 334 msec 2023-05-29 09:57:50,777 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:57:50,777 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354270777"}]},"ts":"1685354270777"} 2023-05-29 09:57:50,779 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 09:57:50,781 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:57:50,783 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 383 msec 2023-05-29 09:57:50,800 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 09:57:50,801 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:50,801 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:50,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 09:57:50,814 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): 
master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:50,820 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-05-29 09:57:50,828 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 09:57:50,835 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:57:50,839 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-29 09:57:50,854 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 09:57:50,856 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 09:57:50,856 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.021sec 2023-05-29 09:57:50,856 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 09:57:50,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 09:57:50,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 09:57:50,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1685354269758-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 09:57:50,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35407,1685354269758-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
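
At this point both the default and hbase namespaces exist and the master reports initialization complete in 1.021sec. A client can confirm the namespace layout through Admin; a minimal sketch, assuming an open Connection named connection (the class name is invented).

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Sketch: list the namespaces the master just created (expect "default" and "hbase").
    public class ListNamespaces {
      public static void print(Connection connection) throws Exception {
        try (Admin admin = connection.getAdmin()) {
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }
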
2023-05-29 09:57:50,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 09:57:50,929 DEBUG [Listener at localhost/38063] zookeeper.ReadOnlyZKClient(139): Connect 0x1f7802f0 to 127.0.0.1:50576 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:57:50,934 DEBUG [Listener at localhost/38063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a3fd2dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:57:50,936 DEBUG [hconnection-0x3836c0eb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:57:50,938 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59166, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:57:50,939 INFO [Listener at localhost/38063] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:57:50,940 INFO [Listener at localhost/38063] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:57:50,943 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 09:57:50,944 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:57:50,944 INFO [Listener at localhost/38063] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 09:57:50,944 INFO [Listener at localhost/38063] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-29 09:57:50,944 INFO [Listener at localhost/38063] wal.TestLogRolling(432): Replication=2 2023-05-29 09:57:50,946 DEBUG [Listener at localhost/38063] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 09:57:50,949 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 09:57:50,951 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 09:57:50,951 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
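
The "set balanceSwitch=false" request above is the test (via its test utility) turning the load balancer off, typically so region placement stays stable while testLogRollOnPipelineRestart disturbs the WAL pipeline. The Admin call behind that log line, as a sketch; the connection variable and class name are placeholders.

    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    // Sketch: disable the balancer, mirroring the "set balanceSwitch=false" line above.
    public class DisableBalancer {
      public static boolean disable(Connection connection) throws Exception {
        try (Admin admin = connection.getAdmin()) {
          // Returns the previous balancer state; the second argument waits for any
          // in-flight balance() call to finish before returning.
          return admin.balancerSwitch(false, true);
        }
      }
    }
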
2023-05-29 09:57:50,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:57:50,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-29 09:57:50,955 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:57:50,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-29 09:57:50,956 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:57:50,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:57:50,957 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:50,958 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5 empty. 
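
The two TableDescriptorChecker warnings above fire because either the table descriptor or the cluster configuration pins "hbase.hregion.max.filesize" at 786432 bytes and "hbase.hregion.memstore.flush.size" at 8192 bytes; for a log-rolling test that is the intent, since tiny files and memstores make flushes and rolls happen quickly, and the create still proceeds because the checker only warns here (with hbase.table.sanity.checks fully enforced such values would be rejected). A sketch of the configuration-level way to get those values, using the key names quoted in the warnings; the class name is invented, and the equivalent per-table setters are TableDescriptorBuilder.setMaxFileSize / setMemStoreFlushSize.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch: undersized split and flush thresholds of the kind that trigger the
    // MAX_FILESIZE / MEMSTORE_FLUSHSIZE warnings logged above.
    public class TinyRegionTuning {
      public static Configuration tinyRegions() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hregion.max.filesize", 786432L);      // value quoted in the first warning
        conf.setLong("hbase.hregion.memstore.flush.size", 8192L); // value quoted in the second warning
        return conf;
      }
    }
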
2023-05-29 09:57:50,959 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:50,959 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-29 09:57:50,970 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-29 09:57:50,972 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => d323509fbea012a39e976856d72fb9d5, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/.tmp 2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing d323509fbea012a39e976856d72fb9d5, disabling compactions & flushes 2023-05-29 09:57:50,980 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. after waiting 0 ms 2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:57:50,980 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 
2023-05-29 09:57:50,980 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for d323509fbea012a39e976856d72fb9d5: 2023-05-29 09:57:50,983 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:57:50,984 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685354270984"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354270984"}]},"ts":"1685354270984"} 2023-05-29 09:57:50,986 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:57:50,987 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:57:50,987 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354270987"}]},"ts":"1685354270987"} 2023-05-29 09:57:50,989 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-29 09:57:50,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=d323509fbea012a39e976856d72fb9d5, ASSIGN}] 2023-05-29 09:57:50,994 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=d323509fbea012a39e976856d72fb9d5, ASSIGN 2023-05-29 09:57:50,995 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=d323509fbea012a39e976856d72fb9d5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43005,1685354269815; forceNewPlan=false, retain=false 2023-05-29 09:57:51,146 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d323509fbea012a39e976856d72fb9d5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:51,146 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685354271146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354271146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354271146"}]},"ts":"1685354271146"} 2023-05-29 09:57:51,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure d323509fbea012a39e976856d72fb9d5, server=jenkins-hbase4.apache.org,43005,1685354269815}] 
2023-05-29 09:57:51,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:57:51,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d323509fbea012a39e976856d72fb9d5, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:57:51,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:57:51,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,309 INFO [StoreOpener-d323509fbea012a39e976856d72fb9d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,310 DEBUG [StoreOpener-d323509fbea012a39e976856d72fb9d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5/info 2023-05-29 09:57:51,310 DEBUG [StoreOpener-d323509fbea012a39e976856d72fb9d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5/info 2023-05-29 09:57:51,310 INFO [StoreOpener-d323509fbea012a39e976856d72fb9d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d323509fbea012a39e976856d72fb9d5 columnFamilyName info 2023-05-29 09:57:51,311 INFO [StoreOpener-d323509fbea012a39e976856d72fb9d5-1] regionserver.HStore(310): Store=d323509fbea012a39e976856d72fb9d5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:57:51,312 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d323509fbea012a39e976856d72fb9d5 2023-05-29 09:57:51,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/data/default/TestLogRolling-testLogRollOnPipelineRestart/d323509fbea012a39e976856d72fb9d5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:57:51,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d323509fbea012a39e976856d72fb9d5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=752619, jitterRate=-0.042996153235435486}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:57:51,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d323509fbea012a39e976856d72fb9d5: 2023-05-29 09:57:51,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5., pid=11, masterSystemTime=1685354271301 2023-05-29 09:57:51,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:57:51,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 
2023-05-29 09:57:51,323 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=d323509fbea012a39e976856d72fb9d5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:57:51,323 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1685354271322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354271322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354271322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354271322"}]},"ts":"1685354271322"} 2023-05-29 09:57:51,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 09:57:51,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure d323509fbea012a39e976856d72fb9d5, server=jenkins-hbase4.apache.org,43005,1685354269815 in 176 msec 2023-05-29 09:57:51,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 09:57:51,330 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=d323509fbea012a39e976856d72fb9d5, ASSIGN in 335 msec 2023-05-29 09:57:51,331 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:57:51,331 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354271331"}]},"ts":"1685354271331"} 2023-05-29 09:57:51,332 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-29 09:57:51,336 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:57:51,337 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 385 msec 2023-05-29 09:57:53,639 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 09:57:56,065 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 09:57:56,066 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-29 09:58:00,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:58:00,957 INFO [Listener at localhost/38063] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 completed 
2023-05-29 09:58:00,960 DEBUG [Listener at localhost/38063] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-05-29 09:58:00,960 DEBUG [Listener at localhost/38063] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:02,966 INFO [Listener at localhost/38063] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 2023-05-29 09:58:02,966 WARN [Listener at localhost/38063] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:02,968 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:02,970 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008 java.io.IOException: Bad response ERROR for BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008 from datanode DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-29 09:58:02,970 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.meta.1685354270341.meta block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]) is bad. 
2023-05-29 09:58:02,969 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:02,970 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]) is bad. 2023-05-29 09:58:02,970 WARN [PacketResponder: BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:36191]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,971 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354269891 block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK], DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36191,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]) is bad. 
2023-05-29 09:58:02,971 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:56862 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56862 dst: /127.0.0.1:33851 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,975 INFO [Listener at localhost/38063] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:02,976 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:60520 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:60520 dst: /127.0.0.1:33851 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33851 remote=/127.0.0.1:60520]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,977 WARN [PacketResponder: BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33851]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,977 WARN [PacketResponder: BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33851]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,977 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270640940_17 at /127.0.0.1:56838 
[Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:56838 dst: /127.0.0.1:33851 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33851 remote=/127.0.0.1:56838]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,980 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:58904 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:36191:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58904 dst: /127.0.0.1:36191 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:02,980 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270640940_17 at /127.0.0.1:57688 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:36191:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57688 dst: /127.0.0.1:36191 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:03,078 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:57712 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:36191:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57712 dst: /127.0.0.1:36191 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:03,078 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:58:03,079 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 801f60e3-a413-4fb4-9db9-e6a199dd6757) service to localhost/127.0.0.1:44117 2023-05-29 09:58:03,079 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data3/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:03,080 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data4/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:03,086 WARN [Listener at localhost/38063] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:03,088 WARN [Listener at localhost/38063] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:03,089 INFO [Listener at localhost/38063] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:03,093 INFO [Listener at localhost/38063] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_40027_datanode____xhcqu8/webapp 2023-05-29 09:58:03,183 INFO [Listener at localhost/38063] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40027 2023-05-29 09:58:03,190 WARN [Listener at localhost/39153] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:03,194 WARN [Listener at localhost/39153] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:03,195 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:03,195 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:03,195 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:03,199 INFO [Listener at localhost/39153] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:03,264 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x443889f72c695d74: Processing first storage report for DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 09:58:03,265 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x443889f72c695d74: from storage DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 node DatanodeRegistration(127.0.0.1:34661, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=39635, infoSecurePort=0, ipcPort=39153, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:03,265 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x443889f72c695d74: Processing first storage report for DS-581e91c0-2dfc-4d53-a925-783c14940cc4 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 
09:58:03,265 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x443889f72c695d74: from storage DS-581e91c0-2dfc-4d53-a925-783c14940cc4 node DatanodeRegistration(127.0.0.1:34661, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=39635, infoSecurePort=0, ipcPort=39153, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:03,301 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270640940_17 at /127.0.0.1:39330 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39330 dst: /127.0.0.1:33851 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:03,302 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:58:03,302 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:39348 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39348 dst: /127.0.0.1:33851 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:03,301 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:39346 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:33851:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39346 dst: /127.0.0.1:33851 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:03,303 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f) service to localhost/127.0.0.1:44117 2023-05-29 09:58:03,305 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:03,305 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data2/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:03,312 WARN [Listener at localhost/39153] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:03,314 WARN [Listener at localhost/39153] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:03,315 INFO [Listener at localhost/39153] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:03,319 INFO [Listener at localhost/39153] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_36311_datanode____1ojebb/webapp 2023-05-29 09:58:03,410 INFO [Listener at localhost/39153] 
log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36311 2023-05-29 09:58:03,418 WARN [Listener at localhost/41639] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:03,484 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7eda387a9ddd9c9: Processing first storage report for DS-11483388-1616-4a42-8c92-d4a7c05684e0 from datanode 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:58:03,484 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7eda387a9ddd9c9: from storage DS-11483388-1616-4a42-8c92-d4a7c05684e0 node DatanodeRegistration(127.0.0.1:36257, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=41063, infoSecurePort=0, ipcPort=41639, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 09:58:03,485 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf7eda387a9ddd9c9: Processing first storage report for DS-d31b9586-f635-4a6f-bf14-0511700fb98a from datanode 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:58:03,485 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf7eda387a9ddd9c9: from storage DS-d31b9586-f635-4a6f-bf14-0511700fb98a node DatanodeRegistration(127.0.0.1:36257, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=41063, infoSecurePort=0, ipcPort=41639, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:04,422 INFO [Listener at localhost/41639] wal.TestLogRolling(481): Data Nodes restarted 2023-05-29 09:58:04,424 INFO [Listener at localhost/41639] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-29 09:58:04,425 WARN [RS:0;jenkins-hbase4:43005.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:04,426 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43005%2C1685354269815:(num 1685354270197) roll requested 2023-05-29 09:58:04,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:04,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:59166 deadline: 1685354294424, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-29 09:58:04,435 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 newFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 2023-05-29 09:58:04,435 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-29 09:58:04,435 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 2023-05-29 09:58:04,435 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36257,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] 2023-05-29 09:58:04,435 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:04,435 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 is not closed yet, will try archiving it next time 2023-05-29 09:58:04,436 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:16,477 INFO [Listener at localhost/41639] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-29 09:58:18,479 WARN [Listener at localhost/41639] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:18,480 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:18,481 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:36257,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:36257,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]) is bad. 
2023-05-29 09:58:18,484 INFO [Listener at localhost/41639] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:18,485 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f) service to localhost/127.0.0.1:44117 2023-05-29 09:58:18,486 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:39216 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:34661:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39216 dst: /127.0.0.1:34661 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34661 remote=/127.0.0.1:39216]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:18,487 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:18,487 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:47002 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:36257:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47002 dst: /127.0.0.1:36257 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:18,488 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data2/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:18,495 WARN [Listener at localhost/41639] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:18,497 WARN [Listener at localhost/41639] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:18,499 INFO [Listener at localhost/41639] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:18,504 INFO [Listener at localhost/41639] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_34307_datanode____.26nk0w/webapp 2023-05-29 09:58:18,594 INFO [Listener at localhost/41639] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34307 2023-05-29 09:58:18,602 WARN [Listener at localhost/46059] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:18,606 WARN [Listener at localhost/46059] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:18,606 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:18,613 INFO [Listener at localhost/46059] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:18,670 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe90932d90457677c: Processing first storage report for DS-11483388-1616-4a42-8c92-d4a7c05684e0 from datanode 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:58:18,670 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe90932d90457677c: from storage DS-11483388-1616-4a42-8c92-d4a7c05684e0 node DatanodeRegistration(127.0.0.1:43345, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=33585, infoSecurePort=0, ipcPort=46059, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 8, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 09:58:18,671 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe90932d90457677c: Processing first storage report for DS-d31b9586-f635-4a6f-bf14-0511700fb98a from datanode 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f 2023-05-29 09:58:18,671 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe90932d90457677c: from storage DS-d31b9586-f635-4a6f-bf14-0511700fb98a node DatanodeRegistration(127.0.0.1:43345, datanodeUuid=0f1125b3-9aea-4ec7-a06e-7787eac7cc2f, infoPort=33585, infoSecurePort=0, ipcPort=46059, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:18,717 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_719831121_17 at /127.0.0.1:38308 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:34661:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:38308 dst: /127.0.0.1:34661 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:18,719 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:58:18,719 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 801f60e3-a413-4fb4-9db9-e6a199dd6757) service to localhost/127.0.0.1:44117 2023-05-29 09:58:18,719 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data3/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:18,720 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data4/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:18,726 WARN [Listener at localhost/46059] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:18,728 WARN [Listener at localhost/46059] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:18,730 INFO [Listener at localhost/46059] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:18,734 INFO [Listener at localhost/46059] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/java.io.tmpdir/Jetty_localhost_35465_datanode____.q5nz2g/webapp 2023-05-29 09:58:18,825 INFO [Listener at localhost/46059] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35465 2023-05-29 09:58:18,832 WARN [Listener at localhost/34383] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:18,893 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb7f6992a6de07: Processing first storage report for DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 09:58:18,894 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb7f6992a6de07: from storage DS-10ae971d-cae9-44b1-a859-20c034d3e7c5 node DatanodeRegistration(127.0.0.1:39759, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=40491, infoSecurePort=0, ipcPort=34383, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-29 09:58:18,894 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbb7f6992a6de07: Processing first storage report for DS-581e91c0-2dfc-4d53-a925-783c14940cc4 from datanode 801f60e3-a413-4fb4-9db9-e6a199dd6757 2023-05-29 09:58:18,894 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbb7f6992a6de07: from storage DS-581e91c0-2dfc-4d53-a925-783c14940cc4 node DatanodeRegistration(127.0.0.1:39759, datanodeUuid=801f60e3-a413-4fb4-9db9-e6a199dd6757, infoPort=40491, infoSecurePort=0, ipcPort=34383, storageInfo=lv=-57;cid=testClusterID;nsid=2128699275;c=1685354269198), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:19,836 INFO [Listener at localhost/34383] wal.TestLogRolling(498): Data Nodes restarted 2023-05-29 09:58:19,838 INFO [Listener at localhost/34383] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-29 09:58:19,839 WARN [RS:0;jenkins-hbase4:43005.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,839 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43005%2C1685354269815:(num 1685354284427) roll requested 2023-05-29 09:58:19,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43005] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:59166 deadline: 1685354309838, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-29 09:58:19,847 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 newFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 2023-05-29 09:58:19,848 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-29 09:58:19,848 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 2023-05-29 09:58:19,848 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:43345,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:39759,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] 2023-05-29 09:58:19,848 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,848 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 is not closed yet, will try archiving it next time 2023-05-29 09:58:19,848 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34661,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,972 WARN [master/jenkins-hbase4:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,973 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C35407%2C1685354269758:(num 1685354269891) roll requested 2023-05-29 09:58:19,973 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,973 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,981 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-29 09:58:19,981 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354269891 with entries=88, filesize=43.80 KB; new WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354299973 2023-05-29 09:58:19,981 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43345,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:39759,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] 2023-05-29 09:58:19,982 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354269891 is not closed yet, will try archiving it next time 2023-05-29 09:58:19,982 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:19,982 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758/jenkins-hbase4.apache.org%2C35407%2C1685354269758.1685354269891; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:31,885 DEBUG [Listener at localhost/34383] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 newFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 2023-05-29 09:58:31,886 INFO [Listener at localhost/34383] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 2023-05-29 09:58:31,891 DEBUG [Listener at localhost/34383] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43345,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:39759,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]] 2023-05-29 09:58:31,891 DEBUG [Listener at localhost/34383] wal.AbstractFSWAL(716): hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 is not closed yet, will try archiving it next time 2023-05-29 09:58:31,891 DEBUG [Listener at localhost/34383] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 2023-05-29 09:58:31,892 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 2023-05-29 09:58:31,895 WARN [IPC Server handler 4 on default port 44117] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 has not been closed. Lease recovery is in progress. 
RecoveryId = 1022 for block blk_1073741832_1016 2023-05-29 09:58:31,897 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 after 5ms 2023-05-29 09:58:32,917 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@71d64317] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-292774215-172.31.14.131-1685354269198:blk_1073741832_1016, datanode=DatanodeInfoWithStorage[127.0.0.1:39759,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1016, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2160 getBytesOnDisk() = 2160 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data4/current/BP-292774215-172.31.14.131-1685354269198/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:35,898 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 after 4006ms 2023-05-29 09:58:35,898 DEBUG [Listener at localhost/34383] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354270197 2023-05-29 09:58:35,907 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685354270764/Put/vlen=175/seqid=0] 2023-05-29 09:58:35,907 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #4: [default/info:d/1685354270810/Put/vlen=9/seqid=0] 2023-05-29 09:58:35,907 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #5: [hbase/info:d/1685354270832/Put/vlen=7/seqid=0] 2023-05-29 09:58:35,907 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #3: 
[\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1685354271319/Put/vlen=231/seqid=0] 2023-05-29 09:58:35,908 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #4: [row1002/info:/1685354280964/Put/vlen=1045/seqid=0] 2023-05-29 09:58:35,908 DEBUG [Listener at localhost/34383] wal.ProtobufLogReader(420): EOF at position 2160 2023-05-29 09:58:35,908 DEBUG [Listener at localhost/34383] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 2023-05-29 09:58:35,908 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 2023-05-29 09:58:35,908 WARN [IPC Server handler 1 on default port 44117] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 has not been closed. Lease recovery is in progress. RecoveryId = 1023 for block blk_1073741838_1018 2023-05-29 09:58:35,909 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 after 1ms 2023-05-29 09:58:36,898 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@100b63f4] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-292774215-172.31.14.131-1685354269198:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:43345,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current/BP-292774215-172.31.14.131-1685354269198/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current/BP-292774215-172.31.14.131-1685354269198/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 4 more 2023-05-29 09:58:39,909 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 after 4001ms 2023-05-29 09:58:39,909 DEBUG [Listener at localhost/34383] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354284427 2023-05-29 09:58:39,913 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #6: [row1003/info:/1685354294472/Put/vlen=1045/seqid=0] 2023-05-29 09:58:39,913 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #7: [row1004/info:/1685354296477/Put/vlen=1045/seqid=0] 2023-05-29 09:58:39,913 DEBUG [Listener at localhost/34383] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-29 09:58:39,913 DEBUG [Listener at localhost/34383] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 2023-05-29 09:58:39,913 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 2023-05-29 09:58:39,914 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 after 1ms 2023-05-29 09:58:39,914 DEBUG [Listener at localhost/34383] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354299840 2023-05-29 09:58:39,917 DEBUG [Listener at localhost/34383] wal.TestLogRolling(522): #9: [row1005/info:/1685354309872/Put/vlen=1045/seqid=0] 2023-05-29 09:58:39,917 DEBUG [Listener at localhost/34383] wal.TestLogRolling(512): recovering lease for hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 2023-05-29 09:58:39,917 INFO [Listener at localhost/34383] 
util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 2023-05-29 09:58:39,917 WARN [IPC Server handler 3 on default port 44117] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 has not been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-29 09:58:39,918 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 after 1ms 2023-05-29 09:58:40,897 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270640940_17 at /127.0.0.1:37520 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:43345:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:37520 dst: /127.0.0.1:43345 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:43345 remote=/127.0.0.1:37520]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:40,898 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_270640940_17 at /127.0.0.1:57114 [Receiving block BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:39759:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:57114 dst: /127.0.0.1:39759 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:40,897 WARN [ResponseProcessor for block BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-29 09:58:40,898 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 block BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:43345,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK], DatanodeInfoWithStorage[127.0.0.1:39759,DS-10ae971d-cae9-44b1-a859-20c034d3e7c5,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:43345,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]) is bad. 
2023-05-29 09:58:40,903 WARN [DataStreamer for file /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 block BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,918 INFO [Listener at localhost/34383] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 after 4001ms 2023-05-29 09:58:43,919 DEBUG [Listener at localhost/34383] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 2023-05-29 09:58:43,922 DEBUG [Listener at localhost/34383] wal.ProtobufLogReader(420): EOF at position 83 2023-05-29 09:58:43,923 INFO [Listener at localhost/34383] regionserver.HRegion(2745): Flushing b319e348cde1fd1d3c095283a666d414 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 09:58:43,924 WARN [RS:0;jenkins-hbase4:43005.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=7, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,925 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43005%2C1685354269815:(num 1685354311875) roll requested 2023-05-29 09:58:43,925 DEBUG [Listener at localhost/34383] regionserver.HRegion(2446): Flush status journal for b319e348cde1fd1d3c095283a666d414: 2023-05-29 09:58:43,925 INFO [Listener at localhost/34383] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) 
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,926 INFO [Listener at localhost/34383] regionserver.HRegion(2745): Flushing d323509fbea012a39e976856d72fb9d5 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-29 09:58:43,927 DEBUG [Listener at localhost/34383] regionserver.HRegion(2446): Flush status journal for d323509fbea012a39e976856d72fb9d5: 2023-05-29 09:58:43,927 INFO [Listener at localhost/34383] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,928 INFO [Listener at localhost/34383] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.95 KB heapSize=5.48 KB 2023-05-29 09:58:43,929 WARN [RS_OPEN_META-regionserver/jenkins-hbase4:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,929 DEBUG [Listener at localhost/34383] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 09:58:43,929 INFO [Listener at localhost/34383] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,932 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 09:58:43,932 INFO [Listener at localhost/34383] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 09:58:43,933 DEBUG [Listener at localhost/34383] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f7802f0 to 127.0.0.1:50576 2023-05-29 09:58:43,934 DEBUG [Listener at localhost/34383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:58:43,934 DEBUG [Listener at localhost/34383] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 09:58:43,934 DEBUG [Listener at localhost/34383] util.JVMClusterUtil(257): Found active master hash=1080010496, stopped=false 2023-05-29 09:58:43,934 INFO [Listener at localhost/34383] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:58:43,936 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:58:43,936 INFO [Listener at localhost/34383] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 09:58:43,936 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 newFile=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354323925 
2023-05-29 09:58:43,936 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:43,936 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:58:43,936 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=7, requesting roll of WAL 2023-05-29 09:58:43,936 DEBUG [Listener at localhost/34383] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6eb34b8c to 127.0.0.1:50576 2023-05-29 09:58:43,937 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354323925 2023-05-29 09:58:43,937 DEBUG [Listener at localhost/34383] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:58:43,937 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:58:43,937 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:58:43,937 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,938 INFO [Listener at localhost/34383] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,43005,1685354269815' ***** 2023-05-29 09:58:43,938 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875 failed. 
Cause="Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-29 09:58:43,938 INFO [Listener at localhost/34383] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 09:58:43,938 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,938 INFO [RS:0;jenkins-hbase4:43005] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 09:58:43,938 ERROR [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry 
org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815/jenkins-hbase4.apache.org%2C43005%2C1685354269815.1685354311875, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-292774215-172.31.14.131-1685354269198:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,938 INFO [RS:0;jenkins-hbase4:43005] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 09:58:43,939 INFO [RS:0;jenkins-hbase4:43005] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 09:58:43,938 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 09:58:43,939 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for b319e348cde1fd1d3c095283a666d414 2023-05-29 09:58:43,939 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:58:43,939 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for d323509fbea012a39e976856d72fb9d5 2023-05-29 09:58:43,940 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:43,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b319e348cde1fd1d3c095283a666d414, disabling compactions & flushes 2023-05-29 09:58:43,941 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:58:43,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,941 DEBUG [RS:0;jenkins-hbase4:43005] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0de1a4d6 to 127.0.0.1:50576 2023-05-29 09:58:43,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,941 DEBUG [RS:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:58:43,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. after waiting 0 ms 2023-05-29 09:58:43,942 INFO [RS:0;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 09:58:43,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,942 INFO [RS:0;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 09:58:43,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b319e348cde1fd1d3c095283a666d414 1/1 column families, dataSize=78 B heapSize=728 B 2023-05-29 09:58:43,942 INFO [RS:0;jenkins-hbase4:43005] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-29 09:58:43,942 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/WALs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:58:43,942 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:58:43,943 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,943 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 09:58:43,943 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 
2023-05-29 09:58:43,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:58:43,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b319e348cde1fd1d3c095283a666d414: 2023-05-29 09:58:43,943 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:58:43,943 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1478): Online Regions={b319e348cde1fd1d3c095283a666d414=hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414., d323509fbea012a39e976856d72fb9d5=TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5., 1588230740=hbase:meta,,1.1588230740} 2023-05-29 09:58:43,943 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase4.apache.org,43005,1685354269815: Unrecoverable exception while closing hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. ***** java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:43,943 WARN [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:33851,DS-11483388-1616-4a42-8c92-d4a7c05684e0,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-29 09:58:43,944 ERROR [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-29 09:58:43,943 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for b319e348cde1fd1d3c095283a666d414 2023-05-29 09:58:43,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:58:43,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-29 09:58:43,944 DEBUG [regionserver/jenkins-hbase4:0.logRoller] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Failed log close in log roller 2023-05-29 09:58:43,944 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1504): Waiting on 1588230740, b319e348cde1fd1d3c095283a666d414, d323509fbea012a39e976856d72fb9d5 2023-05-29 09:58:43,944 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase4.apache.org%2C43005%2C1685354269815.meta:.meta(num 1685354270341) roll requested 2023-05-29 09:58:43,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:58:43,944 DEBUG [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-29 09:58:43,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:58:43,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:58:43,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-29 09:58:43,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-29 09:58:43,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-29 09:58:43,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-29 09:58:43,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1183842304, "init": 513802240, "max": 2051014656, "used": 590944352 }, "NonHeapMemoryUsage": { "committed": 139681792, "init": 2555904, "max": -1, "used": 137088560 }, "Verbose": false, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-29 09:58:43,946 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35407] master.MasterRpcServices(609): jenkins-hbase4.apache.org,43005,1685354269815 reported a fatal error: ***** ABORTING region server jenkins-hbase4.apache.org,43005,1685354269815: Unrecoverable exception while closing hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. ***** Cause: java.io.IOException: Cannot append; log is closed, regionName = hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2700) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-29 09:58:43,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d323509fbea012a39e976856d72fb9d5, disabling compactions & flushes 2023-05-29 09:58:43,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 
2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. after waiting 0 ms 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d323509fbea012a39e976856d72fb9d5: 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b319e348cde1fd1d3c095283a666d414, disabling compactions & flushes 2023-05-29 09:58:43,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. after waiting 0 ms 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b319e348cde1fd1d3c095283a666d414: 2023-05-29 09:58:43,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1685354270397.b319e348cde1fd1d3c095283a666d414. 
2023-05-29 09:58:44,071 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 09:58:44,071 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 09:58:44,073 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:58:44,144 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for d323509fbea012a39e976856d72fb9d5 2023-05-29 09:58:44,144 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:58:44,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d323509fbea012a39e976856d72fb9d5, disabling compactions & flushes 2023-05-29 09:58:44,144 DEBUG [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1504): Waiting on 1588230740, d323509fbea012a39e976856d72fb9d5 2023-05-29 09:58:44,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:44,144 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:44,145 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. after waiting 0 ms 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d323509fbea012a39e976856d72fb9d5: 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1685354270950.d323509fbea012a39e976856d72fb9d5. 2023-05-29 09:58:44,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-29 09:58:44,345 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-29 09:58:44,345 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43005,1685354269815; all regions closed. 2023-05-29 09:58:44,345 DEBUG [RS:0;jenkins-hbase4:43005] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:58:44,345 INFO [RS:0;jenkins-hbase4:43005] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:58:44,345 INFO [RS:0;jenkins-hbase4:43005] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-29 09:58:44,345 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 09:58:44,346 INFO [RS:0;jenkins-hbase4:43005] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43005 2023-05-29 09:58:44,349 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:58:44,349 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43005,1685354269815 2023-05-29 09:58:44,349 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:58:44,350 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43005,1685354269815] 2023-05-29 09:58:44,350 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43005,1685354269815; numProcessing=1 2023-05-29 09:58:44,351 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43005,1685354269815 already deleted, retry=false 2023-05-29 09:58:44,351 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43005,1685354269815 expired; onlineServers=0 2023-05-29 09:58:44,351 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,35407,1685354269758' ***** 2023-05-29 09:58:44,351 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 09:58:44,351 DEBUG [M:0;jenkins-hbase4:35407] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44602bd3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:58:44,351 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegionServer(1144): stopping server 
jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:58:44,351 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35407,1685354269758; all regions closed. 2023-05-29 09:58:44,351 DEBUG [M:0;jenkins-hbase4:35407] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:58:44,352 DEBUG [M:0;jenkins-hbase4:35407] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 09:58:44,352 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 09:58:44,352 DEBUG [M:0;jenkins-hbase4:35407] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 09:58:44,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354269973] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354269973,5,FailOnTimeoutGroup] 2023-05-29 09:58:44,352 INFO [M:0;jenkins-hbase4:35407] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 09:58:44,353 INFO [M:0;jenkins-hbase4:35407] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 09:58:44,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354269974] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354269974,5,FailOnTimeoutGroup] 2023-05-29 09:58:44,353 INFO [M:0;jenkins-hbase4:35407] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 09:58:44,353 DEBUG [M:0;jenkins-hbase4:35407] master.HMaster(1512): Stopping service threads 2023-05-29 09:58:44,353 INFO [M:0;jenkins-hbase4:35407] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 09:58:44,353 ERROR [M:0;jenkins-hbase4:35407] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 09:58:44,354 INFO [M:0;jenkins-hbase4:35407] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 09:58:44,354 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 09:58:44,354 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 09:58:44,354 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:44,354 DEBUG [M:0;jenkins-hbase4:35407] zookeeper.ZKUtil(398): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 09:58:44,354 WARN [M:0;jenkins-hbase4:35407] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 09:58:44,354 INFO [M:0;jenkins-hbase4:35407] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 09:58:44,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:58:44,354 INFO [M:0;jenkins-hbase4:35407] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 09:58:44,355 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:58:44,355 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:44,355 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:44,355 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:58:44,355 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 09:58:44,355 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.17 KB heapSize=45.78 KB 2023-05-29 09:58:44,368 INFO [M:0;jenkins-hbase4:35407] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.17 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/54682efc9f234dd7bb7f011f31efd45f 2023-05-29 09:58:44,375 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/54682efc9f234dd7bb7f011f31efd45f as hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/54682efc9f234dd7bb7f011f31efd45f 2023-05-29 09:58:44,380 INFO [M:0;jenkins-hbase4:35407] regionserver.HStore(1080): Added hdfs://localhost:44117/user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/54682efc9f234dd7bb7f011f31efd45f, entries=11, sequenceid=92, filesize=7.0 K 2023-05-29 09:58:44,381 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegion(2948): Finished flush of dataSize ~38.17 KB/39087, heapSize ~45.77 KB/46864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=92, compaction requested=false 2023-05-29 09:58:44,383 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:44,383 DEBUG [M:0;jenkins-hbase4:35407] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:58:44,383 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/1131b5b5-b4e0-5308-ca6f-071a65151e4d/MasterData/WALs/jenkins-hbase4.apache.org,35407,1685354269758 2023-05-29 09:58:44,387 INFO [M:0;jenkins-hbase4:35407] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 09:58:44,387 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 09:58:44,387 INFO [M:0;jenkins-hbase4:35407] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35407 2023-05-29 09:58:44,390 DEBUG [M:0;jenkins-hbase4:35407] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35407,1685354269758 already deleted, retry=false 2023-05-29 09:58:44,450 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:58:44,450 INFO [RS:0;jenkins-hbase4:43005] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43005,1685354269815; zookeeper connection closed. 
2023-05-29 09:58:44,450 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): regionserver:43005-0x100765fcf2f0001, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:58:44,451 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5694413b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5694413b 2023-05-29 09:58:44,454 INFO [Listener at localhost/34383] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 09:58:44,550 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:58:44,550 INFO [M:0;jenkins-hbase4:35407] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35407,1685354269758; zookeeper connection closed. 2023-05-29 09:58:44,551 DEBUG [Listener at localhost/38063-EventThread] zookeeper.ZKWatcher(600): master:35407-0x100765fcf2f0000, quorum=127.0.0.1:50576, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:58:44,552 WARN [Listener at localhost/34383] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:44,555 INFO [Listener at localhost/34383] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:44,659 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:58:44,659 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 801f60e3-a413-4fb4-9db9-e6a199dd6757) service to localhost/127.0.0.1:44117 2023-05-29 09:58:44,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data3/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:44,660 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data4/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:44,662 WARN [Listener at localhost/34383] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:58:44,666 INFO [Listener at localhost/34383] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:44,671 WARN [BP-292774215-172.31.14.131-1685354269198 heartbeating to localhost/127.0.0.1:44117] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-292774215-172.31.14.131-1685354269198 (Datanode Uuid 0f1125b3-9aea-4ec7-a06e-7787eac7cc2f) service to localhost/127.0.0.1:44117 2023-05-29 09:58:44,672 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data1/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:44,673 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/cluster_fb2c74d6-2820-8bfa-4e37-0fa76a68e2c0/dfs/data/data2/current/BP-292774215-172.31.14.131-1685354269198] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:58:44,781 INFO [Listener at localhost/34383] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:58:44,892 INFO [Listener at localhost/34383] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 09:58:44,905 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 09:58:44,915 INFO [Listener at localhost/34383] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=85 (was 75) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:44117 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:44117 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34383 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (244949468) connection to localhost/127.0.0.1:44117 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) 
io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44117 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=463 (was 460) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=37 (was 71), ProcessCount=168 (was 166) - ProcessCount LEAK? 
-, AvailableMemoryMB=3178 (was 3400) 2023-05-29 09:58:44,923 INFO [Listener at localhost/34383] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=85, OpenFileDescriptor=463, MaxFileDescriptor=60000, SystemLoadAverage=37, ProcessCount=168, AvailableMemoryMB=3178 2023-05-29 09:58:44,923 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 09:58:44,923 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/hadoop.log.dir so I do NOT create it in target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0 2023-05-29 09:58:44,923 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/1d4c8bc1-22b1-06f4-a1a2-c8fa614a494f/hadoop.tmp.dir so I do NOT create it in target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0 2023-05-29 09:58:44,923 INFO [Listener at localhost/34383] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524, deleteOnExit=true 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/test.cache.data in system properties and HBase conf 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/hadoop.log.dir in system properties and HBase conf 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 09:58:44,924 DEBUG [Listener at 
localhost/34383] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 09:58:44,924 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/nfs.dump.dir in system properties and HBase conf 
2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/java.io.tmpdir in system properties and HBase conf 2023-05-29 09:58:44,925 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:58:44,926 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 09:58:44,926 INFO [Listener at localhost/34383] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 09:58:44,927 WARN [Listener at localhost/34383] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 09:58:44,931 WARN [Listener at localhost/34383] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:58:44,931 WARN [Listener at localhost/34383] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:58:44,971 WARN [Listener at localhost/34383] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:44,973 INFO [Listener at localhost/34383] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:44,977 INFO [Listener at localhost/34383] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/java.io.tmpdir/Jetty_localhost_43937_hdfs____.9bbcgo/webapp 2023-05-29 09:58:45,067 INFO [Listener at localhost/34383] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43937 2023-05-29 09:58:45,068 WARN [Listener at localhost/34383] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-29 09:58:45,072 WARN [Listener at localhost/34383] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:58:45,072 WARN [Listener at localhost/34383] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:58:45,116 WARN [Listener at localhost/40249] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:45,125 WARN [Listener at localhost/40249] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:45,127 WARN [Listener at localhost/40249] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:45,128 INFO [Listener at localhost/40249] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:45,132 INFO [Listener at localhost/40249] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/java.io.tmpdir/Jetty_localhost_36017_datanode____.v74m86/webapp 2023-05-29 09:58:45,232 INFO [Listener at localhost/40249] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36017 2023-05-29 09:58:45,241 WARN [Listener at localhost/35383] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:45,271 WARN [Listener at localhost/35383] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:58:45,275 WARN [Listener at localhost/35383] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:58:45,276 INFO [Listener at localhost/35383] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:58:45,280 INFO [Listener at localhost/35383] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/java.io.tmpdir/Jetty_localhost_45277_datanode____.ob3fs/webapp 2023-05-29 09:58:45,354 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4589ba6e62ffac: Processing first storage report for DS-a455a67d-83ec-4415-bc54-b604e0ffe74f from datanode 6e3fcf71-6658-418b-a66f-9a7914b851f6 2023-05-29 09:58:45,354 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c4589ba6e62ffac: from storage DS-a455a67d-83ec-4415-bc54-b604e0ffe74f node DatanodeRegistration(127.0.0.1:41387, datanodeUuid=6e3fcf71-6658-418b-a66f-9a7914b851f6, infoPort=34685, infoSecurePort=0, ipcPort=35383, storageInfo=lv=-57;cid=testClusterID;nsid=1863162217;c=1685354324935), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:45,354 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4589ba6e62ffac: Processing first storage report for DS-5fd4e147-cd71-42b6-8f88-fea57ae2ef78 from datanode 6e3fcf71-6658-418b-a66f-9a7914b851f6 2023-05-29 09:58:45,354 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x2c4589ba6e62ffac: from storage DS-5fd4e147-cd71-42b6-8f88-fea57ae2ef78 node DatanodeRegistration(127.0.0.1:41387, datanodeUuid=6e3fcf71-6658-418b-a66f-9a7914b851f6, infoPort=34685, infoSecurePort=0, ipcPort=35383, storageInfo=lv=-57;cid=testClusterID;nsid=1863162217;c=1685354324935), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:45,379 INFO [Listener at localhost/35383] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45277 2023-05-29 09:58:45,391 WARN [Listener at localhost/40607] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:58:45,479 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4295a9ccbc9de09: Processing first storage report for DS-423065d6-142a-4e49-925a-6954049ab4d3 from datanode 1d4617ca-7744-474b-b3c8-34399e37fd04 2023-05-29 09:58:45,479 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4295a9ccbc9de09: from storage DS-423065d6-142a-4e49-925a-6954049ab4d3 node DatanodeRegistration(127.0.0.1:34275, datanodeUuid=1d4617ca-7744-474b-b3c8-34399e37fd04, infoPort=36813, infoSecurePort=0, ipcPort=40607, storageInfo=lv=-57;cid=testClusterID;nsid=1863162217;c=1685354324935), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:45,479 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4295a9ccbc9de09: Processing first storage report for DS-a45abe80-29d8-402c-9d40-808f61137d4b from datanode 1d4617ca-7744-474b-b3c8-34399e37fd04 2023-05-29 09:58:45,479 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4295a9ccbc9de09: from storage DS-a45abe80-29d8-402c-9d40-808f61137d4b node DatanodeRegistration(127.0.0.1:34275, datanodeUuid=1d4617ca-7744-474b-b3c8-34399e37fd04, infoPort=36813, infoSecurePort=0, ipcPort=40607, storageInfo=lv=-57;cid=testClusterID;nsid=1863162217;c=1685354324935), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:58:45,498 DEBUG [Listener at localhost/40607] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0 2023-05-29 09:58:45,501 INFO [Listener at localhost/40607] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/zookeeper_0, clientPort=55831, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 09:58:45,502 INFO [Listener at localhost/40607] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55831 2023-05-29 09:58:45,502 INFO [Listener at localhost/40607] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,503 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,515 INFO [Listener at localhost/40607] util.FSUtils(471): Created version file at hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321 with version=8 2023-05-29 09:58:45,515 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 09:58:45,517 INFO [Listener at localhost/40607] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:58:45,517 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,517 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,517 INFO [Listener at localhost/40607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:58:45,517 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,518 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:58:45,518 INFO [Listener at localhost/40607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:58:45,519 INFO [Listener at localhost/40607] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36093 2023-05-29 09:58:45,519 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,520 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,521 INFO [Listener at localhost/40607] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36093 connecting to ZooKeeper ensemble=127.0.0.1:55831 2023-05-29 09:58:45,528 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:360930x0, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:58:45,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36093-0x1007660a8ff0000 connected 2023-05-29 09:58:45,540 DEBUG [Listener at localhost/40607] 
zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:58:45,540 DEBUG [Listener at localhost/40607] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:58:45,541 DEBUG [Listener at localhost/40607] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:58:45,541 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36093 2023-05-29 09:58:45,541 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36093 2023-05-29 09:58:45,541 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36093 2023-05-29 09:58:45,542 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36093 2023-05-29 09:58:45,542 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36093 2023-05-29 09:58:45,542 INFO [Listener at localhost/40607] master.HMaster(444): hbase.rootdir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321, hbase.cluster.distributed=false 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:58:45,555 INFO [Listener at localhost/40607] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:58:45,556 INFO [Listener at localhost/40607] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33323 2023-05-29 09:58:45,557 INFO [Listener at localhost/40607] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:58:45,557 DEBUG [Listener at localhost/40607] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 
09:58:45,558 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,559 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,560 INFO [Listener at localhost/40607] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33323 connecting to ZooKeeper ensemble=127.0.0.1:55831 2023-05-29 09:58:45,564 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:333230x0, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:58:45,565 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33323-0x1007660a8ff0001 connected 2023-05-29 09:58:45,565 DEBUG [Listener at localhost/40607] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:58:45,565 DEBUG [Listener at localhost/40607] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:58:45,566 DEBUG [Listener at localhost/40607] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:58:45,566 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33323 2023-05-29 09:58:45,566 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33323 2023-05-29 09:58:45,567 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33323 2023-05-29 09:58:45,567 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33323 2023-05-29 09:58:45,567 DEBUG [Listener at localhost/40607] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33323 2023-05-29 09:58:45,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,569 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:58:45,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,571 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:58:45,571 DEBUG [Listener at 
localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:58:45,571 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:58:45,572 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:58:45,572 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36093,1685354325517 from backup master directory 2023-05-29 09:58:45,573 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,573 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:58:45,574 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
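The entries above show the master and region server processes connecting to the test ZooKeeper ensemble, setting watchers on znodes such as /hbase/master and /hbase/running before those nodes exist, and then reacting to NodeCreated/NodeDeleted/NodeChildrenChanged events. As a minimal sketch of the underlying mechanism, using the plain Apache ZooKeeper client rather than HBase's internal ZKWatcher/ZKUtil wrappers (quorum string and znode paths come from the log; the class name and session timeout are illustrative):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
    public static void main(String[] args) throws Exception {
        // Quorum as reported in the log; the session timeout here is illustrative.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:55831", 90_000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // Fires for NodeCreated/NodeDeleted/NodeChildrenChanged, mirroring the
                // "Received ZooKeeper Event" lines in the log above.
                System.out.println("event=" + event.getType() + " path=" + event.getPath());
            }
        });
        // exists() with watch=true registers a watcher even if the znode does not exist yet,
        // which is what "Set watcher on znode that does not yet exist" refers to.
        zk.exists("/hbase/master", true);
        zk.exists("/hbase/running", true);
    }
}
```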
2023-05-29 09:58:45,574 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/hbase.id with ID: 3d683bf2-157e-480d-bdaa-08ee8027addf 2023-05-29 09:58:45,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:45,606 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x51fb89aa to 127.0.0.1:55831 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:58:45,623 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3325022, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:58:45,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:58:45,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 09:58:45,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:58:45,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store-tmp 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:58:45,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:45,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:58:45,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:58:45,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/WALs/jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36093%2C1685354325517, suffix=, logDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/WALs/jenkins-hbase4.apache.org,36093,1685354325517, archiveDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/oldWALs, maxLogs=10 2023-05-29 09:58:45,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/WALs/jenkins-hbase4.apache.org,36093,1685354325517/jenkins-hbase4.apache.org%2C36093%2C1685354325517.1685354325640 2023-05-29 09:58:45,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK], DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK]] 2023-05-29 09:58:45,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:58:45,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:45,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,690 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,692 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 09:58:45,692 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 09:58:45,692 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:45,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:58:45,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:58:45,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=843607, jitterRate=0.07270261645317078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:58:45,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:58:45,701 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 09:58:45,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 09:58:45,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
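The 'master:store' table descriptor logged above (a single 'proc' family with BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false') is constructed internally by MasterRegion, but the same attributes map onto the public descriptor builders. A hedged sketch of an equivalent declaration, not how MasterRegion itself builds it:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
    public static TableDescriptor build() {
        // Mirrors the attributes printed for the 'proc' column family in the log above.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)
                .setMaxVersions(1)
                .setInMemory(false)
                .setBlocksize(64 * 1024)
                .build();
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
                .setColumnFamily(proc)
                .build();
    }
}
```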
2023-05-29 09:58:45,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 09:58:45,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 09:58:45,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 09:58:45,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 09:58:45,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 09:58:45,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 09:58:45,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 09:58:45,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
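The ProcedureExecutor and StochasticLoadBalancer lines above echo their effective settings (5 core workers, maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, runMaxSteps=false). These are normally tuned through hbase-site.xml or the Configuration object; the key names below are my best recollection of the 2.4-era names and should be treated as assumptions rather than values taken from this log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
    public static Configuration tune() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key names; the values mirror what the log reports as loaded config.
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        return conf;
    }
}
```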
2023-05-29 09:58:45,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 09:58:45,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 09:58:45,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 09:58:45,720 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,720 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 09:58:45,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 09:58:45,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 09:58:45,723 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:58:45,723 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:58:45,723 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36093,1685354325517, sessionid=0x1007660a8ff0000, setting cluster-up flag (Was=false) 2023-05-29 09:58:45,728 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 09:58:45,733 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,735 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
09:58:45,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 09:58:45,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:45,741 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.hbase-snapshot/.tmp 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:58:45,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 09:58:45,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:58:45,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354355747 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 09:58:45,747 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 09:58:45,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 09:58:45,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 09:58:45,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 09:58:45,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 09:58:45,748 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:58:45,748 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 09:58:45,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 09:58:45,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 09:58:45,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354325749,5,FailOnTimeoutGroup] 2023-05-29 09:58:45,749 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354325749,5,FailOnTimeoutGroup] 2023-05-29 09:58:45,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
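The cleaner initialization above shows the LogsCleaner and HFileCleaner chores being scheduled with a 600000 ms period and their delegate cleaners (TimeToLiveLogCleaner, HFileLinkCleaner, SnapshotHFileCleaner, and so on) being installed. A rough sketch of the configuration knobs commonly associated with these chores; the key names are assumptions from memory, not confirmed by this log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreTuningSketch {
    public static Configuration tune() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys: how long old WALs and archived HFiles are retained before deletion,
        // and how often the cleaner chores run (the log shows period=600000 ms).
        conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
        conf.setLong("hbase.master.hfilecleaner.ttl", 300_000L);
        conf.setInt("hbase.master.cleaner.interval", 600_000);
        return conf;
    }
}
```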
2023-05-29 09:58:45,749 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:58:45,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 09:58:45,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,759 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:58:45,759 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:58:45,759 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321 2023-05-29 09:58:45,767 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:45,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:58:45,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/info 2023-05-29 09:58:45,770 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:58:45,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:45,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:58:45,772 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:58:45,772 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:58:45,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:45,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 
2023-05-29 09:58:45,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/table 2023-05-29 09:58:45,774 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:58:45,775 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:45,775 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740 2023-05-29 09:58:45,776 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740 2023-05-29 09:58:45,778 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
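The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta table descriptor, so the lower bound falls back to the region memstore flush size divided by the number of column families (reported as 16.0 M, matching the flushSizeLowerBound=16777216 seen when the region opens). A sketch of setting that bound explicitly on a table descriptor; the property name comes from the log itself, the 16 MB value is illustrative:

```java
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBoundSketch {
    public static TableDescriptor withExplicitLowerBound(TableDescriptor base) {
        // The policy reads this property from the table descriptor; 16 MB mirrors the
        // fallback the log computes (memstore flush size / number of column families).
        return TableDescriptorBuilder.newBuilder(base)
                .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                          String.valueOf(16L * 1024 * 1024))
                .build();
    }
}
```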
2023-05-29 09:58:45,779 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:58:45,780 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:58:45,781 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=757477, jitterRate=-0.03681863844394684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:58:45,782 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:58:45,782 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:58:45,782 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:58:45,782 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(951): ClusterId : 3d683bf2-157e-480d-bdaa-08ee8027addf 2023-05-29 09:58:45,784 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:58:45,785 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:58:45,785 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 09:58:45,785 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 09:58:45,787 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 09:58:45,787 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:58:45,787 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:58:45,788 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 09:58:45,789 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:58:45,790 DEBUG [RS:0;jenkins-hbase4:33323] zookeeper.ReadOnlyZKClient(139): Connect 
0x1452b546 to 127.0.0.1:55831 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:58:45,794 DEBUG [RS:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a9549bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:58:45,794 DEBUG [RS:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1eaade36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:58:45,803 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33323 2023-05-29 09:58:45,803 INFO [RS:0;jenkins-hbase4:33323] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:58:45,803 INFO [RS:0;jenkins-hbase4:33323] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:58:45,803 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1022): About to register with Master. 2023-05-29 09:58:45,804 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,36093,1685354325517 with isa=jenkins-hbase4.apache.org/172.31.14.131:33323, startcode=1685354325554 2023-05-29 09:58:45,804 DEBUG [RS:0;jenkins-hbase4:33323] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:58:45,806 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42179, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:58:45,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,808 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321 2023-05-29 09:58:45,808 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40249 2023-05-29 09:58:45,808 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:58:45,809 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:58:45,810 DEBUG [RS:0;jenkins-hbase4:33323] zookeeper.ZKUtil(162): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,810 WARN [RS:0;jenkins-hbase4:33323] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 09:58:45,810 INFO [RS:0;jenkins-hbase4:33323] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:58:45,810 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1946): logDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,810 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33323,1685354325554] 2023-05-29 09:58:45,815 DEBUG [RS:0;jenkins-hbase4:33323] zookeeper.ZKUtil(162): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,815 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:58:45,816 INFO [RS:0;jenkins-hbase4:33323] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:58:45,817 INFO [RS:0;jenkins-hbase4:33323] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:58:45,817 INFO [RS:0;jenkins-hbase4:33323] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:58:45,817 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,817 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:58:45,819 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
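The WAL lines above report the region server's FSHLog configuration as blocksize=256 MB, rollsize=128 MB and maxLogs=32, which is the part of the setup this TestLogRolling run exercises. In 2.4 these numbers normally derive from configuration (rollsize being the block size times a roll multiplier); the key names below are assumptions from memory, shown only to connect the logged numbers to tunables:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollingConfigSketch {
    public static Configuration tune() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key names; the values echo the "WAL configuration" line above.
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // 256 MB * 0.5 = 128 MB rollsize
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
    }
}
```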
2023-05-29 09:58:45,819 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,819 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,819 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,819 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,819 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,820 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:58:45,820 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,820 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,820 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,820 DEBUG [RS:0;jenkins-hbase4:33323] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:58:45,821 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,821 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,821 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:45,833 INFO [RS:0;jenkins-hbase4:33323] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:58:45,833 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1685354325554-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 09:58:45,849 INFO [RS:0;jenkins-hbase4:33323] regionserver.Replication(203): jenkins-hbase4.apache.org,33323,1685354325554 started 2023-05-29 09:58:45,849 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33323,1685354325554, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33323, sessionid=0x1007660a8ff0001 2023-05-29 09:58:45,849 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:58:45,849 DEBUG [RS:0;jenkins-hbase4:33323] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,849 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33323,1685354325554' 2023-05-29 09:58:45,849 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33323,1685354325554' 2023-05-29 09:58:45,850 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:58:45,851 DEBUG [RS:0;jenkins-hbase4:33323] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:58:45,851 DEBUG [RS:0;jenkins-hbase4:33323] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:58:45,851 INFO [RS:0;jenkins-hbase4:33323] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:58:45,851 INFO [RS:0;jenkins-hbase4:33323] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
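At this point the region server is registered with the master and serving RPCs, so a client can connect to the mini cluster. A minimal sketch using the public client API; the Configuration is assumed to carry the test cluster's ZooKeeper quorum (for example obtained from HBaseTestingUtility#getConfiguration()):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClusterStatusSketch {
    public static void printLiveServers(Configuration conf) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            ClusterMetrics metrics = admin.getClusterMetrics();
            // For this run one would expect a single live region server, e.g.
            // jenkins-hbase4.apache.org,33323,1685354325554.
            for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
                System.out.println("live region server: " + sn);
            }
            System.out.println("active master: " + metrics.getMasterName());
        }
    }
}
```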
2023-05-29 09:58:45,938 DEBUG [jenkins-hbase4:36093] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 09:58:45,940 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33323,1685354325554, state=OPENING 2023-05-29 09:58:45,941 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 09:58:45,942 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:45,942 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:58:45,942 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33323,1685354325554}] 2023-05-29 09:58:45,953 INFO [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33323%2C1685354325554, suffix=, logDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554, archiveDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs, maxLogs=32 2023-05-29 09:58:45,963 INFO [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354325954 2023-05-29 09:58:45,963 DEBUG [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK], DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK]] 2023-05-29 09:58:46,097 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:46,097 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 09:58:46,100 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 09:58:46,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 09:58:46,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:58:46,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33323%2C1685354325554.meta, suffix=.meta, logDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554, archiveDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs, maxLogs=32 2023-05-29 09:58:46,112 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.meta.1685354326105.meta 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK], DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK]] 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 09:58:46,112 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 09:58:46,112 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:46,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 09:58:46,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 09:58:46,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:58:46,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/info 2023-05-29 09:58:46,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/info 2023-05-29 09:58:46,115 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:58:46,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:46,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:58:46,116 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:58:46,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:58:46,117 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:58:46,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:46,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:58:46,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/table 2023-05-29 09:58:46,118 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/table 2023-05-29 09:58:46,118 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:58:46,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:46,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740 2023-05-29 09:58:46,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740 2023-05-29 09:58:46,123 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:58:46,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:58:46,125 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=802726, jitterRate=0.02071981132030487}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:58:46,125 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:58:46,127 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354326097 2023-05-29 09:58:46,132 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 09:58:46,132 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 09:58:46,133 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33323,1685354325554, state=OPEN 2023-05-29 09:58:46,139 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 09:58:46,139 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:58:46,141 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 09:58:46,141 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33323,1685354325554 in 197 msec 2023-05-29 09:58:46,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 09:58:46,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 356 msec 2023-05-29 09:58:46,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 402 msec 2023-05-29 09:58:46,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354326145, completionTime=-1 2023-05-29 09:58:46,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 09:58:46,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 09:58:46,148 DEBUG [hconnection-0x8ecc5ca-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:58:46,152 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:58:46,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 09:58:46,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354386153 2023-05-29 09:58:46,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354446153 2023-05-29 09:58:46,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-29 09:58:46,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1685354325517-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1685354325517-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1685354325517-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36093, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 09:58:46,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:58:46,160 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 09:58:46,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 09:58:46,161 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:58:46,162 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:58:46,165 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,165 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5 empty. 2023-05-29 09:58:46,166 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,166 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 09:58:46,180 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 09:58:46,181 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49d50a250e976229e4440bc35b7eaba5, NAME => 'hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp 2023-05-29 09:58:46,192 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:46,192 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 49d50a250e976229e4440bc35b7eaba5, disabling compactions & flushes 2023-05-29 09:58:46,192 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 
2023-05-29 09:58:46,192 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:46,193 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. after waiting 0 ms 2023-05-29 09:58:46,193 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:46,193 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:46,193 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 49d50a250e976229e4440bc35b7eaba5: 2023-05-29 09:58:46,195 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:58:46,196 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354326196"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354326196"}]},"ts":"1685354326196"} 2023-05-29 09:58:46,198 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:58:46,199 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:58:46,200 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354326199"}]},"ts":"1685354326199"} 2023-05-29 09:58:46,201 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 09:58:46,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=49d50a250e976229e4440bc35b7eaba5, ASSIGN}] 2023-05-29 09:58:46,210 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=49d50a250e976229e4440bc35b7eaba5, ASSIGN 2023-05-29 09:58:46,211 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=49d50a250e976229e4440bc35b7eaba5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33323,1685354325554; forceNewPlan=false, retain=false 2023-05-29 09:58:46,362 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=49d50a250e976229e4440bc35b7eaba5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:46,362 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354326362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354326362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354326362"}]},"ts":"1685354326362"} 2023-05-29 09:58:46,364 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 49d50a250e976229e4440bc35b7eaba5, server=jenkins-hbase4.apache.org,33323,1685354325554}] 2023-05-29 09:58:46,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:46,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49d50a250e976229e4440bc35b7eaba5, NAME => 'hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:58:46,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:46,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,522 INFO [StoreOpener-49d50a250e976229e4440bc35b7eaba5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,523 DEBUG [StoreOpener-49d50a250e976229e4440bc35b7eaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/info 2023-05-29 09:58:46,523 DEBUG [StoreOpener-49d50a250e976229e4440bc35b7eaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/info 2023-05-29 09:58:46,524 INFO [StoreOpener-49d50a250e976229e4440bc35b7eaba5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49d50a250e976229e4440bc35b7eaba5 columnFamilyName info 2023-05-29 09:58:46,524 INFO [StoreOpener-49d50a250e976229e4440bc35b7eaba5-1] regionserver.HStore(310): Store=49d50a250e976229e4440bc35b7eaba5/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:46,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:58:46,529 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:58:46,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49d50a250e976229e4440bc35b7eaba5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=767039, jitterRate=-0.02466021478176117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:58:46,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49d50a250e976229e4440bc35b7eaba5: 2023-05-29 09:58:46,531 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5., pid=6, masterSystemTime=1685354326517 2023-05-29 09:58:46,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:46,534 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 
2023-05-29 09:58:46,534 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=49d50a250e976229e4440bc35b7eaba5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:46,534 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354326534"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354326534"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354326534"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354326534"}]},"ts":"1685354326534"} 2023-05-29 09:58:46,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 09:58:46,538 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 49d50a250e976229e4440bc35b7eaba5, server=jenkins-hbase4.apache.org,33323,1685354325554 in 172 msec 2023-05-29 09:58:46,540 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 09:58:46,540 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=49d50a250e976229e4440bc35b7eaba5, ASSIGN in 330 msec 2023-05-29 09:58:46,541 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:58:46,541 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354326541"}]},"ts":"1685354326541"} 2023-05-29 09:58:46,543 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 09:58:46,545 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:58:46,547 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 386 msec 2023-05-29 09:58:46,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 09:58:46,562 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:58:46,562 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:46,566 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 09:58:46,575 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): 
master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:58:46,579 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-05-29 09:58:46,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 09:58:46,594 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:58:46,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-29 09:58:46,602 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 09:58:46,604 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.030sec 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1685354325517-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 09:58:46,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1685354325517-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 09:58:46,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 09:58:46,685 DEBUG [Listener at localhost/40607] zookeeper.ReadOnlyZKClient(139): Connect 0x65382357 to 127.0.0.1:55831 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:58:46,690 DEBUG [Listener at localhost/40607] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31226544, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:58:46,691 DEBUG [hconnection-0x272d4af3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:58:46,693 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:58:46,695 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:58:46,695 INFO [Listener at localhost/40607] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:58:46,698 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 09:58:46,698 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:58:46,699 INFO [Listener at localhost/40607] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 09:58:46,700 DEBUG [Listener at localhost/40607] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 09:58:46,703 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 09:58:46,704 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 09:58:46,704 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-29 09:58:46,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:58:46,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:58:46,707 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:58:46,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-29 09:58:46,708 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:58:46,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:58:46,712 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:46,712 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d empty. 
2023-05-29 09:58:46,713 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:46,713 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-29 09:58:46,725 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 09:58:46,726 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => b47e9f0a771e1e4ec8fd9fa565b8905d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/.tmp 2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing b47e9f0a771e1e4ec8fd9fa565b8905d, disabling compactions & flushes 2023-05-29 09:58:46,733 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. after waiting 0 ms 2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:46,733 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 
2023-05-29 09:58:46,733 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:58:46,735 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:58:46,736 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685354326736"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354326736"}]},"ts":"1685354326736"} 2023-05-29 09:58:46,738 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:58:46,739 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:58:46,739 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354326739"}]},"ts":"1685354326739"} 2023-05-29 09:58:46,741 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-29 09:58:46,746 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=b47e9f0a771e1e4ec8fd9fa565b8905d, ASSIGN}] 2023-05-29 09:58:46,748 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=b47e9f0a771e1e4ec8fd9fa565b8905d, ASSIGN 2023-05-29 09:58:46,748 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=b47e9f0a771e1e4ec8fd9fa565b8905d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33323,1685354325554; forceNewPlan=false, retain=false 2023-05-29 09:58:46,900 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b47e9f0a771e1e4ec8fd9fa565b8905d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:46,900 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685354326899"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354326899"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354326899"}]},"ts":"1685354326899"} 2023-05-29 09:58:46,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure b47e9f0a771e1e4ec8fd9fa565b8905d, server=jenkins-hbase4.apache.org,33323,1685354325554}] 2023-05-29 09:58:47,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:47,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b47e9f0a771e1e4ec8fd9fa565b8905d, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:58:47,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:58:47,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,060 INFO [StoreOpener-b47e9f0a771e1e4ec8fd9fa565b8905d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,061 DEBUG [StoreOpener-b47e9f0a771e1e4ec8fd9fa565b8905d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info 2023-05-29 09:58:47,061 DEBUG [StoreOpener-b47e9f0a771e1e4ec8fd9fa565b8905d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info 2023-05-29 09:58:47,061 INFO [StoreOpener-b47e9f0a771e1e4ec8fd9fa565b8905d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b47e9f0a771e1e4ec8fd9fa565b8905d columnFamilyName info 2023-05-29 09:58:47,062 INFO [StoreOpener-b47e9f0a771e1e4ec8fd9fa565b8905d-1] regionserver.HStore(310): Store=b47e9f0a771e1e4ec8fd9fa565b8905d/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:58:47,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:58:47,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:58:47,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b47e9f0a771e1e4ec8fd9fa565b8905d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=765584, jitterRate=-0.02651005983352661}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:58:47,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:58:47,070 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d., pid=11, masterSystemTime=1685354327054 2023-05-29 09:58:47,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:47,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 
2023-05-29 09:58:47,072 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=b47e9f0a771e1e4ec8fd9fa565b8905d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:47,072 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1685354327072"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354327072"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354327072"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354327072"}]},"ts":"1685354327072"} 2023-05-29 09:58:47,076 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 09:58:47,076 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure b47e9f0a771e1e4ec8fd9fa565b8905d, server=jenkins-hbase4.apache.org,33323,1685354325554 in 172 msec 2023-05-29 09:58:47,079 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 09:58:47,079 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=b47e9f0a771e1e4ec8fd9fa565b8905d, ASSIGN in 330 msec 2023-05-29 09:58:47,080 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:58:47,080 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354327080"}]},"ts":"1685354327080"} 2023-05-29 09:58:47,081 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-29 09:58:47,085 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:58:47,086 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 381 msec 2023-05-29 09:58:51,683 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 09:58:51,816 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:58:56,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:58:56,710 INFO [Listener at localhost/40607] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-29 09:58:56,713 DEBUG [Listener at localhost/40607] 
hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:58:56,713 DEBUG [Listener at localhost/40607] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:58:56,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 09:58:56,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-29 09:58:56,733 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-29 09:58:56,734 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:58:56,734 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-29 09:58:56,734 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-29 09:58:56,734 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,735 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 09:58:56,736 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:58:56,736 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,736 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:58:56,736 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:58:56,736 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,736 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 09:58:56,736 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 09:58:56,737 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 09:58:56,737 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 09:58:56,738 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-29 09:58:56,739 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-29 09:58:56,740 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-29 09:58:56,740 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:58:56,741 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-29 09:58:56,742 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 09:58:56,742 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 09:58:56,742 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:56,742 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. started... 
2023-05-29 09:58:56,743 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 49d50a250e976229e4440bc35b7eaba5 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 09:58:56,753 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/.tmp/info/2b4702d38a1c42aebde24975977058f6 2023-05-29 09:58:56,761 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/.tmp/info/2b4702d38a1c42aebde24975977058f6 as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/info/2b4702d38a1c42aebde24975977058f6 2023-05-29 09:58:56,767 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/info/2b4702d38a1c42aebde24975977058f6, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 09:58:56,768 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 49d50a250e976229e4440bc35b7eaba5 in 25ms, sequenceid=6, compaction requested=false 2023-05-29 09:58:56,769 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 49d50a250e976229e4440bc35b7eaba5: 2023-05-29 09:58:56,769 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:58:56,769 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 09:58:56,769 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
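The entries above trace one memstore flush end to end: the 78 B snapshot is written to an HFile under the region's .tmp directory, committed into the 'info' store, and the flush is recorded at sequenceid=6. A minimal client-side sketch of requesting such a flush against this cluster, assuming only the stock HBase 2.x client API (hbase-site.xml on the classpath pointing at the mini-cluster's quorum; the table name is the one flushed above and is otherwise illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableExample {
      public static void main(String[] args) throws Exception {
        // Assumes hbase-site.xml on the classpath points at the cluster's ZooKeeper quorum.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Ask the master to flush every region of the table; the region servers then
          // write each memstore snapshot to a .tmp HFile and commit it into the store,
          // which is the sequence visible in the log entries above.
          admin.flush(TableName.valueOf("hbase:namespace"));
        }
      }
    }
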
2023-05-29 09:58:56,769 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,769 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-29 09:58:56,769 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-29 09:58:56,771 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,771 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,771 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,771 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:58:56,771 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:58:56,771 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,771 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 09:58:56,771 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:58:56,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:58:56,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 09:58:56,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:58:56,773 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-29 09:58:56,773 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-29 09:58:56,773 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@68544670[Count = 0] remaining members to acquire global barrier 2023-05-29 09:58:56,774 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,775 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,775 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,775 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,775 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-29 09:58:56,775 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,775 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 09:58:56,775 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-29 09:58:56,775 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase4.apache.org,33323,1685354325554' in zk 2023-05-29 09:58:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-29 09:58:56,778 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
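The acquire/reached exchange traced above is a two-phase barrier kept entirely in ZooKeeper: each member creates a child under acquired/<procedure> once its local work (here, the region flush) is done and watches reached/<procedure>; the coordinator creates reached/<procedure> only after every expected member has checked in. A member-side sketch of that handshake with the plain ZooKeeper client, assuming the coordinator has already created the acquired/<procedure> parent; the quorum address, procedure name, and member name below are placeholders:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    /** Member-side half of a two-phase barrier over znodes (acquired -> reached). */
    public class BarrierMemberSketch {
      public static void main(String[] args) throws Exception {
        String base = "/hbase/flush-table-proc";   // barrier root, as in the log
        String proc = "demo-procedure";            // hypothetical procedure name
        String member = "member-1";                // hypothetical member name
        CountDownLatch reached = new CountDownLatch(1);

        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });

        // Watch for the coordinator's "reached" node before announcing acquisition,
        // so the notification cannot be missed between the two steps.
        Watcher reachedWatcher = (WatchedEvent e) -> {
          if (e.getType() == Watcher.Event.EventType.NodeCreated) {
            reached.countDown();
          }
        };
        if (zk.exists(base + "/reached/" + proc, reachedWatcher) != null) {
          reached.countDown(); // coordinator already reached the barrier
        }

        // Announce that this member has acquired, i.e. finished its local work.
        // Assumes the coordinator created acquired/<proc> beforehand, as in the log.
        zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        reached.await(); // released once the coordinator creates reached/<proc>
        zk.close();
      }
    }

The abort/<procedure> znode plays the same role on the error path: members keep a watch on it and give up on the barrier if it ever appears.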
2023-05-29 09:58:56,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,778 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:58:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:58:56,778 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-29 09:58:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:58:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:58:56,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 09:58:56,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:58:56,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 09:58:56,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase4.apache.org,33323,1685354325554': 2023-05-29 09:58:56,781 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' released barrier for procedure'hbase:namespace', counting down latch. Waiting for 0 more 2023-05-29 09:58:56,781 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-29 09:58:56,781 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-29 09:58:56,781 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 09:58:56,781 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-29 09:58:56,781 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 09:58:56,783 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,783 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,783 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,783 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:58:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:58:56,783 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,783 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:58:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:58:56,784 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,784 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:58:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 09:58:56,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:58:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:58:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----hbase:namespace 2023-05-29 09:58:56,785 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:58:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-29 09:58:56,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,791 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,791 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:58:56,791 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 09:58:56,791 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:58:56,792 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:58:56,791 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-29 09:58:56,791 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:58:56,792 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:58:56,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-29 09:58:56,792 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,792 DEBUG 
[(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:58:56,792 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-29 09:58:56,792 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:58:56,793 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:58:56,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 09:58:56,792 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-29 09:58:56,794 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-29 09:58:56,795 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 09:59:06,795 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-29 09:59:06,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 09:59:06,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 09:59:06,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,812 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:06,812 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:06,812 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 09:59:06,812 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
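The client side of the round trip is also visible above: HBaseAdmin submits the globally-barriered flush as a named procedure and then polls the master ("Sleeping: 10000ms while waiting for procedure completion" / "Checking to see if procedure ... is done") until it reports success. A sketch of driving the same 'flush-table-proc' by hand through the Admin interface, assuming the standard HBase 2.x client; execProcedure already blocks and polls internally, so the explicit status call afterwards only makes the handshake visible:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableProcExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          String signature = "flush-table-proc";  // procedure family, as in the log
          String instance = "hbase:namespace";    // table to flush; illustrative choice
          Map<String, String> props = new HashMap<>();

          // Submit the procedure to the master and wait for it to finish; the
          // "Sleeping ... while waiting for procedure completion" entries above
          // come from this wait loop.
          admin.execProcedure(signature, instance, props);

          // The status probe the loop relies on is also exposed directly:
          boolean done = admin.isProcedureFinished(signature, instance, props);
          System.out.println(signature + " : " + instance + " finished = " + done);
        }
      }
    }
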
2023-05-29 09:59:06,813 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,813 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,814 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,814 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:06,814 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:06,814 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:06,814 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,814 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,815 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,815 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 09:59:06,816 DEBUG [member: 
'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:06,816 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 09:59:06,816 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 09:59:06,816 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 09:59:06,816 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:06,816 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. started... 2023-05-29 09:59:06,817 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing b47e9f0a771e1e4ec8fd9fa565b8905d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 09:59:06,830 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/edd089754755440295c1b763f3f06965 2023-05-29 09:59:06,837 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/edd089754755440295c1b763f3f06965 as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965 2023-05-29 09:59:06,842 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965, entries=1, sequenceid=5, filesize=5.8 K 2023-05-29 09:59:06,843 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for b47e9f0a771e1e4ec8fd9fa565b8905d in 26ms, sequenceid=5, compaction requested=false 2023-05-29 09:59:06,844 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:06,844 DEBUG 
[rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:06,844 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 09:59:06,844 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-29 09:59:06,844 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,844 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 09:59:06,844 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 09:59:06,846 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,846 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:06,847 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,847 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:06,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:06,848 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 09:59:06,848 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6c6bdfa2[Count = 0] remaining members to acquire global barrier 2023-05-29 09:59:06,848 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 09:59:06,848 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,849 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,849 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,849 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,849 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-29 09:59:06,849 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 09:59:06,849 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,850 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 09:59:06,850 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33323,1685354325554' in zk 2023-05-29 09:59:06,851 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,851 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 09:59:06,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:06,851 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:06,851 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:59:06,851 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-29 09:59:06,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:06,852 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:06,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:06,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,853 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,854 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33323,1685354325554': 2023-05-29 09:59:06,854 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 09:59:06,854 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 09:59:06,854 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 09:59:06,854 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 09:59:06,854 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,854 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 09:59:06,859 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,859 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,860 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:06,860 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:06,860 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:06,860 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,860 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:06,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,861 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:06,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,862 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,865 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,865 DEBUG [Listener at 
localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:06,865 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,865 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:06,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:06,865 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:06,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 09:59:06,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:06,865 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 09:59:06,865 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,865 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:06,866 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,866 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,866 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:06,866 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 09:59:06,866 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:06,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:06,866 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 09:59:16,866 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-29 09:59:16,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 09:59:16,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 09:59:16,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
2023-05-29 09:59:16,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,877 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:16,878 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:16,878 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 09:59:16,878 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-29 09:59:16,878 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,878 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,880 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:16,880 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,880 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:16,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:16,880 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,880 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 09:59:16,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,881 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 09:59:16,881 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,881 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,881 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 09:59:16,881 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,881 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 09:59:16,881 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:16,881 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 09:59:16,882 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 09:59:16,882 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 09:59:16,882 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:16,882 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. started... 
2023-05-29 09:59:16,882 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing b47e9f0a771e1e4ec8fd9fa565b8905d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 09:59:16,892 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/fd2a859eca8f47248b562e59e7a00a7e 2023-05-29 09:59:16,899 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/fd2a859eca8f47248b562e59e7a00a7e as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e 2023-05-29 09:59:16,904 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e, entries=1, sequenceid=9, filesize=5.8 K 2023-05-29 09:59:16,905 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for b47e9f0a771e1e4ec8fd9fa565b8905d in 23ms, sequenceid=9, compaction requested=false 2023-05-29 09:59:16,905 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:16,905 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:16,905 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 09:59:16,905 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
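As in the earlier flushes, the new HFile is first written under the region's .tmp directory and then "committed" into the column-family directory; on HDFS that commit is essentially a rename of the finished file into place, so readers only ever see complete files. A sketch of that final step with the plain Hadoop FileSystem API, using shortened placeholder paths rather than the real ones from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CommitTmpHFileSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:40249");  // this test run's NameNode

        try (FileSystem fs = FileSystem.get(conf)) {
          // Hypothetical region layout: the flusher writes the HFile into .tmp/<family>/
          // and the commit step moves it to <family>/ in a single rename.
          Path tmp = new Path("/data/default/SomeTable/region/.tmp/info/hfile");
          Path dst = new Path("/data/default/SomeTable/region/info/hfile");

          if (!fs.rename(tmp, dst)) {
            throw new java.io.IOException("Failed to commit " + tmp + " as " + dst);
          }
        }
      }
    }
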
2023-05-29 09:59:16,905 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,905 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 09:59:16,905 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 09:59:16,907 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,907 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:16,908 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,908 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:16,908 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,909 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,909 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:16,909 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 09:59:16,909 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@29694c4b[Count = 0] remaining members to acquire 
global barrier 2023-05-29 09:59:16,909 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 09:59:16,909 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,910 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,910 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,910 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,910 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-29 09:59:16,910 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 09:59:16,911 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,911 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 09:59:16,911 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33323,1685354325554' in zk 2023-05-29 09:59:16,913 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,913 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 09:59:16,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 
09:59:16,913 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:16,913 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:59:16,913 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-29 09:59:16,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:16,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:16,914 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:16,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,915 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,916 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33323,1685354325554': 2023-05-29 09:59:16,916 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 09:59:16,916 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 09:59:16,916 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
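The acquire/reached exchange logged above is a plain two-phase barrier built on znodes: a member announces itself under .../acquired/<procedure>, the coordinator creates .../reached/<procedure> once every member has joined, and each member then publishes its completion under that node. A stripped-down sketch of the member side with the raw ZooKeeper client; it is illustrative only and not HBase's ZKProcedureMemberRpcs, though the paths, quorum address and member name are copied from the log:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierMemberSketch {
      public static void main(String[] args) throws Exception {
        String base = "/hbase/flush-table-proc";
        String proc = "TestLogRolling-testCompactionRecordDoesntBlockRolling";
        String member = "jenkins-hbase4.apache.org,33323,1685354325554";
        ZooKeeper zk = new ZooKeeper("127.0.0.1:55831", 30000, event -> { });

        // 1. Join the barrier: the coordinator already created acquired/<proc>,
        //    so the member only adds its own child to signal local 'acquire'.
        zk.create(base + "/acquired/" + proc + "/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // 2. Wait for the coordinator to create the global 'reached' barrier node.
        CountDownLatch reached = new CountDownLatch(1);
        if (zk.exists(base + "/reached/" + proc, event -> reached.countDown()) != null) {
          reached.countDown();   // barrier was already up when we looked
        }
        reached.await();

        // 3. Publish completion so the coordinator can run its finish phase.
        zk.create(base + "/reached/" + proc + "/" + member, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
      }
    }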
2023-05-29 09:59:16,916 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 09:59:16,916 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,916 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 09:59:16,918 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,918 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,918 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:16,918 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,918 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:16,918 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:16,918 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:16,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,919 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:16,919 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,919 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:16,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,920 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,923 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:59:16,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:16,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
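The indented '|-' listings above come from ZKProcedureUtil dumping the whole /hbase/flush-table-proc subtree each time a node changes. A small recursive listing in the same spirit, written against the plain ZooKeeper API rather than the HBase utility:

    import org.apache.zookeeper.ZooKeeper;

    public class ZkTreeDump {
      // Print a znode and its children with ZKProcedureUtil-style indentation.
      static void dump(ZooKeeper zk, String path, String prefix) throws Exception {
        System.out.println(prefix + path.substring(path.lastIndexOf('/') + 1));
        for (String child : zk.getChildren(path, false)) {
          dump(zk, path + "/" + child, prefix + "---");
        }
      }

      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:55831", 30000, event -> { });
        dump(zk, "/hbase/flush-table-proc", "|-");
        zk.close();
      }
    }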
2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:16,923 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:16,923 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:16,923 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,924 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 09:59:16,924 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 09:59:16,924 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,924 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:16,924 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:16,924 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:26,924 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2704): Getting current status of procedure from master... 
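The HBaseAdmin lines above ('Waiting a max of 300000 ms ... (#1) Sleeping: 10000ms') are the client library's own completion poll inside execProcedure. Written out by hand it amounts to the loop below; a sketch that assumes the Admin.isProcedureFinished call which pairs with execProcedure, with the signature, instance name and timings taken from the log:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.client.Admin;

    public class WaitForFlushProc {
      // Ask the master whether the flush-table-proc instance has finished,
      // sleeping between polls, until a deadline is reached.
      static void waitForProcedure(Admin admin, String table) throws Exception {
        Map<String, String> props = new HashMap<>();
        long deadline = System.currentTimeMillis() + 300_000;   // 300000 ms max, as logged
        while (System.currentTimeMillis() < deadline) {
          if (admin.isProcedureFinished("flush-table-proc", table, props)) {
            return;
          }
          Thread.sleep(10_000);                                  // 10000 ms sleep, as logged
        }
        throw new IllegalStateException("flush-table-proc did not finish in time");
      }
    }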
2023-05-29 09:59:26,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 09:59:26,937 INFO [Listener at localhost/40607] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354325954 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354366927 2023-05-29 09:59:26,938 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK], DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK]] 2023-05-29 09:59:26,938 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(716): hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354325954 is not closed yet, will try archiving it next time 2023-05-29 09:59:26,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 09:59:26,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-29 09:59:26,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,946 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:26,946 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:26,946 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 09:59:26,946 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
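The 'Rolled WAL ... with entries=13 ... new WAL ...' entry is the roll the test forces between flush rounds; the test thread rolls its WAL handle directly. From a regular client the same effect is requested per region server through the Admin API; a sketch with the server name copied from the log:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollWal {
      // Ask the region server to close its current WAL and open a new one,
      // producing the "Rolled WAL ... new WAL ..." entries seen above.
      static void rollWal(Admin admin) throws Exception {
        ServerName rs = ServerName.valueOf("jenkins-hbase4.apache.org,33323,1685354325554");
        admin.rollWALWriter(rs);
      }
    }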
2023-05-29 09:59:26,947 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,947 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,948 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,948 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:26,948 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:26,948 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:26,948 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,948 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 09:59:26,948 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,949 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,949 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 09:59:26,949 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,949 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,950 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 09:59:26,950 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,951 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 09:59:26,951 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:26,951 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 09:59:26,951 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 09:59:26,951 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 09:59:26,951 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:26,951 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. started... 2023-05-29 09:59:26,951 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing b47e9f0a771e1e4ec8fd9fa565b8905d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 09:59:26,962 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/57a6c64091634f75aca130dfc0755a4c 2023-05-29 09:59:26,968 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/57a6c64091634f75aca130dfc0755a4c as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c 2023-05-29 09:59:26,973 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c, entries=1, sequenceid=13, filesize=5.8 K 2023-05-29 09:59:26,974 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for b47e9f0a771e1e4ec8fd9fa565b8905d in 23ms, sequenceid=13, compaction requested=true 2023-05-29 09:59:26,974 DEBUG 
[rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:26,974 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:26,974 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 09:59:26,974 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-29 09:59:26,974 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,974 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 09:59:26,974 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 09:59:26,976 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,976 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:26,976 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:26,976 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,977 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 09:59:26,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:26,977 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:26,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,977 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,978 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:26,978 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 09:59:26,978 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@52a74890[Count = 0] remaining members to acquire global barrier 2023-05-29 09:59:26,978 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 09:59:26,978 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,979 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,979 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,979 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,979 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-29 09:59:26,979 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 09:59:26,979 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33323,1685354325554' in zk 2023-05-29 09:59:26,979 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,980 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 09:59:26,982 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 09:59:26,982 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,982 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:59:26,982 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:26,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:26,983 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-29 09:59:26,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:26,983 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:26,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:26,984 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,985 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33323,1685354325554': 2023-05-29 09:59:26,985 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 09:59:26,985 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 09:59:26,985 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 09:59:26,985 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 09:59:26,985 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,985 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 09:59:26,987 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,987 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,987 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created 
event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:26,987 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:26,987 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:26,987 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,987 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,988 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,988 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:26,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,991 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,991 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:26,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,992 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,994 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,994 DEBUG [Listener at 
localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,994 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:26,994 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,994 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:26,995 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:26,994 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:26,995 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,995 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:26,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 09:59:26,995 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 09:59:26,995 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 09:59:26,995 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,995 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:26,995 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 09:59:26,996 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:26,996 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:26,996 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 09:59:36,996 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-29 09:59:36,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 09:59:36,997 DEBUG [Listener at localhost/40607] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 09:59:37,002 DEBUG [Listener at localhost/40607] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 09:59:37,002 DEBUG [Listener at localhost/40607] regionserver.HStore(1912): b47e9f0a771e1e4ec8fd9fa565b8905d/info is initiating minor compaction (all files) 2023-05-29 09:59:37,002 INFO [Listener at localhost/40607] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:59:37,002 INFO [Listener at localhost/40607] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:37,002 INFO [Listener at localhost/40607] regionserver.HRegion(2259): Starting compaction of b47e9f0a771e1e4ec8fd9fa565b8905d/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 
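The three flushed files (~5.8 K each, 17769 bytes in total) all pass the exploring policy's size-ratio test, so the whole set is chosen for a minor compaction ('3 files of size 17769 ... 1 in ratio'). The core of that test is simple arithmetic; a simplified sketch of it, noting that the real ExploringCompactionPolicy also scores candidate permutations and that 1.2 is only the usual default for hbase.hstore.compaction.ratio:

    import java.util.Arrays;
    import java.util.List;

    public class RatioCheckSketch {
      // Simplified size-ratio test: every file in the candidate set must be no
      // larger than ratio * (sum of the other files in the set).
      static boolean filesInRatio(List<Long> fileSizes, double ratio) {
        long total = fileSizes.stream().mapToLong(Long::longValue).sum();
        for (long size : fileSizes) {
          if (size > (total - size) * ratio) {
            return false;
          }
        }
        return true;
      }

      public static void main(String[] args) {
        // Roughly the three ~5.8 K store files from the log, 17769 bytes total.
        System.out.println(filesInRatio(Arrays.asList(5923L, 5923L, 5923L), 1.2));  // true
      }
    }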
2023-05-29 09:59:37,002 INFO [Listener at localhost/40607] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c] into tmpdir=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp, totalSize=17.4 K 2023-05-29 09:59:37,003 DEBUG [Listener at localhost/40607] compactions.Compactor(207): Compacting edd089754755440295c1b763f3f06965, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1685354346805 2023-05-29 09:59:37,003 DEBUG [Listener at localhost/40607] compactions.Compactor(207): Compacting fd2a859eca8f47248b562e59e7a00a7e, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1685354356868 2023-05-29 09:59:37,004 DEBUG [Listener at localhost/40607] compactions.Compactor(207): Compacting 57a6c64091634f75aca130dfc0755a4c, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1685354366926 2023-05-29 09:59:37,017 INFO [Listener at localhost/40607] throttle.PressureAwareThroughputController(145): b47e9f0a771e1e4ec8fd9fa565b8905d#info#compaction#19 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 09:59:37,031 DEBUG [Listener at localhost/40607] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/fae353dc63ca4eb9a61fb9732b36032b as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fae353dc63ca4eb9a61fb9732b36032b 2023-05-29 09:59:37,037 INFO [Listener at localhost/40607] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in b47e9f0a771e1e4ec8fd9fa565b8905d/info of b47e9f0a771e1e4ec8fd9fa565b8905d into fae353dc63ca4eb9a61fb9732b36032b(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
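Here the compaction is driven directly by the test thread; from a normal client the equivalent is to request a compaction and poll its state. A sketch against the Admin API (the table name is from the log, the rest is generic; compact() is asynchronous, so the state may briefly read NONE before the work is picked up):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.CompactionState;

    public class CompactAndWait {
      static void compactAndWait(Admin admin) throws Exception {
        TableName name =
            TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
        admin.compact(name);                                  // request a minor compaction
        while (admin.getCompactionState(name) != CompactionState.NONE) {
          Thread.sleep(1_000);                                // poll until the store settles
        }
      }
    }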
2023-05-29 09:59:37,037 DEBUG [Listener at localhost/40607] regionserver.HRegion(2289): Compaction status journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:37,051 INFO [Listener at localhost/40607] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354366927 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354377039 2023-05-29 09:59:37,052 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK], DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK]] 2023-05-29 09:59:37,052 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(716): hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354366927 is not closed yet, will try archiving it next time 2023-05-29 09:59:37,052 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354325954 to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354325954 2023-05-29 09:59:37,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(933): Client=jenkins//172.31.14.131 procedure request for: flush-table-proc 2023-05-29 09:59:37,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-29 09:59:37,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,059 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:37,060 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:37,060 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-29 09:59:37,060 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
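The WAL-Archive-0 entry above moves the fully flushed 1685354325954 WAL out of the per-server WALs directory into oldWALs. A sketch of listing what has been archived with the Hadoop FileSystem API; the path is copied from the log and this is illustration only, not part of the test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListOldWals {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Rolled WALs are archived here once no region still needs them for recovery.
        Path oldWals = new Path(
            "hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs");
        FileSystem fs = oldWals.getFileSystem(conf);
        for (FileStatus status : fs.listStatus(oldWals)) {
          System.out.println(status.getPath().getName() + "  " + status.getLen() + " bytes");
        }
      }
    }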
2023-05-29 09:59:37,061 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,061 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,067 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,067 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:37,067 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:37,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:37,067 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,067 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-29 09:59:37,067 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,068 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,068 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-29 09:59:37,068 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,068 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,070 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-29 09:59:37,070 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,070 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-29 09:59:37,070 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-29 09:59:37,071 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-29 09:59:37,071 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-29 09:59:37,071 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-29 09:59:37,071 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:37,071 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. started... 2023-05-29 09:59:37,071 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing b47e9f0a771e1e4ec8fd9fa565b8905d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 09:59:37,098 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/20ca049e9cb24462b8aa166c6274dd90 2023-05-29 09:59:37,104 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/20ca049e9cb24462b8aa166c6274dd90 as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/20ca049e9cb24462b8aa166c6274dd90 2023-05-29 09:59:37,108 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/20ca049e9cb24462b8aa166c6274dd90, entries=1, sequenceid=18, filesize=5.8 K 2023-05-29 09:59:37,109 INFO [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for b47e9f0a771e1e4ec8fd9fa565b8905d in 38ms, sequenceid=18, compaction requested=false 2023-05-29 09:59:37,109 DEBUG 
[rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:37,109 DEBUG [rs(jenkins-hbase4.apache.org,33323,1685354325554)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:37,110 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-29 09:59:37,110 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-29 09:59:37,110 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,110 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-29 09:59:37,110 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-29 09:59:37,112 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,112 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,112 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,112 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:37,112 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:37,112 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,112 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-29 09:59:37,112 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:37,113 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:37,113 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,113 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,113 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:37,114 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,33323,1685354325554' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-29 09:59:37,114 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@188ae372[Count = 0] remaining members to acquire global barrier 2023-05-29 09:59:37,114 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-29 09:59:37,114 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,115 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,115 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,115 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,115 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
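The coordinator periodically dumps the znode tree it uses as a distributed barrier: /hbase/flush-table-proc with acquired, reached and abort children, each holding one child per procedure and, below that, one per participating region server. The following small standalone sketch prints the same tree with a plain ZooKeeper client; the quorum address is the test-local one from the log, and the recursive helper is illustrative.

import java.util.List;

import org.apache.zookeeper.ZooKeeper;

public class DumpFlushProcZnodes {
  public static void main(String[] args) throws Exception {
    // Quorum address taken from the log; a real deployment would use its own
    // hbase.zookeeper.quorum setting instead.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:55831", 30000, event -> {
      // No-op watcher; the tree is read once.
    });
    try {
      print(zk, "/hbase/flush-table-proc", "");
    } finally {
      zk.close();
    }
  }

  // Recursively print a znode and its children, indenting one level per depth.
  private static void print(ZooKeeper zk, String path, String indent) throws Exception {
    System.out.println(indent + path);
    List<String> children = zk.getChildren(path, false);
    for (String child : children) {
      print(zk, path + "/" + child, indent + "  ");
    }
  }
}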
2023-05-29 09:59:37,115 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-29 09:59:37,115 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase4.apache.org,33323,1685354325554' in zk 2023-05-29 09:59:37,115 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,115 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-29 09:59:37,117 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-29 09:59:37,117 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,117 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-29 09:59:37,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:37,117 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:37,117 DEBUG [member: 'jenkins-hbase4.apache.org,33323,1685354325554' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-29 09:59:37,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:37,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:37,118 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,119 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,119 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:37,119 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,120 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,120 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase4.apache.org,33323,1685354325554': 2023-05-29 09:59:37,120 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,33323,1685354325554' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-29 09:59:37,120 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-29 09:59:37,120 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-29 09:59:37,120 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-29 09:59:37,120 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,120 INFO [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-29 09:59:37,123 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,123 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-29 09:59:37,123 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-29 09:59:37,123 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,123 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,123 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:37,123 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:37,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-29 09:59:37,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:37,124 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,124 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-29 09:59:37,125 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,125 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,126 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-29 09:59:37,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,127 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,129 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,129 DEBUG [Listener at 
localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-29 09:59:37,129 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,129 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-29 09:59:37,130 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:37,129 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-29 09:59:37,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-29 09:59:37,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-29 09:59:37,129 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,129 DEBUG [(jenkins-hbase4.apache.org,36093,1685354325517)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-29 09:59:37,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-29 09:59:37,130 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:37,130 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-29 09:59:37,131 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,130 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-29 09:59:37,131 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:37,131 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-29 09:59:37,131 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:37,131 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-29 09:59:47,131 DEBUG [Listener at localhost/40607] client.HBaseAdmin(2704): Getting current status of procedure from master... 
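The client side of the barrier is visible here: HBaseAdmin submits the procedure, then sleeps and re-asks the master whether 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling' is done. The same exchange can be driven explicitly through Admin.execProcedure and Admin.isProcedureFinished, as in the sketch below; the signature and instance strings are taken from the log, while the empty property map and the one-second poll are assumptions.

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExplicitFlushTableProc {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    String signature = "flush-table-proc";  // procedure family, as logged by the master
    String instance = "TestLogRolling-testCompactionRecordDoesntBlockRolling";  // table name
    Map<String, String> props = new HashMap<>();  // no extra properties passed

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Submit the globally-barriered flush procedure; the admin client itself
      // waits on it (the "Waiting a max of 300000 ms" lines in the log).
      admin.execProcedure(signature, instance, props);
      // Issue the same done-check RPC the admin client uses internally
      // ("Checking to see if procedure from request:flush-table-proc is done").
      while (!admin.isProcedureFinished(signature, instance, props)) {
        Thread.sleep(1000);
      }
    }
  }
}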
2023-05-29 09:59:47,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36093] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-29 09:59:47,142 INFO [Listener at localhost/40607] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354377039 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354387134 2023-05-29 09:59:47,142 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41387,DS-a455a67d-83ec-4415-bc54-b604e0ffe74f,DISK], DatanodeInfoWithStorage[127.0.0.1:34275,DS-423065d6-142a-4e49-925a-6954049ab4d3,DISK]] 2023-05-29 09:59:47,142 DEBUG [Listener at localhost/40607] wal.AbstractFSWAL(716): hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354377039 is not closed yet, will try archiving it next time 2023-05-29 09:59:47,142 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354366927 to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs/jenkins-hbase4.apache.org%2C33323%2C1685354325554.1685354366927 2023-05-29 09:59:47,142 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 09:59:47,143 INFO [Listener at localhost/40607] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 09:59:47,143 DEBUG [Listener at localhost/40607] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65382357 to 127.0.0.1:55831 2023-05-29 09:59:47,143 DEBUG [Listener at localhost/40607] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:59:47,143 DEBUG [Listener at localhost/40607] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 09:59:47,143 DEBUG [Listener at localhost/40607] util.JVMClusterUtil(257): Found active master hash=1496322615, stopped=false 2023-05-29 09:59:47,143 INFO [Listener at localhost/40607] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:59:47,146 INFO [Listener at localhost/40607] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 09:59:47,146 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:59:47,146 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 09:59:47,146 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
2023-05-29 09:59:47,146 DEBUG [Listener at localhost/40607] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x51fb89aa to 127.0.0.1:55831 2023-05-29 09:59:47,147 DEBUG [Listener at localhost/40607] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:59:47,147 INFO [Listener at localhost/40607] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33323,1685354325554' ***** 2023-05-29 09:59:47,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:59:47,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:59:47,147 INFO [Listener at localhost/40607] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 09:59:47,147 INFO [RS:0;jenkins-hbase4:33323] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 09:59:47,148 INFO [RS:0;jenkins-hbase4:33323] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 09:59:47,148 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 09:59:47,148 INFO [RS:0;jenkins-hbase4:33323] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 09:59:47,148 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(3303): Received CLOSE for b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:59:47,148 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(3303): Received CLOSE for 49d50a250e976229e4440bc35b7eaba5 2023-05-29 09:59:47,148 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:47,148 DEBUG [RS:0;jenkins-hbase4:33323] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1452b546 to 127.0.0.1:55831 2023-05-29 09:59:47,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b47e9f0a771e1e4ec8fd9fa565b8905d, disabling compactions & flushes 2023-05-29 09:59:47,149 DEBUG [RS:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:59:47,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:47,149 INFO [RS:0;jenkins-hbase4:33323] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 09:59:47,149 INFO [RS:0;jenkins-hbase4:33323] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 09:59:47,149 INFO [RS:0;jenkins-hbase4:33323] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 09:59:47,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 
2023-05-29 09:59:47,149 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 09:59:47,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. after waiting 0 ms 2023-05-29 09:59:47,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:47,149 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-29 09:59:47,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b47e9f0a771e1e4ec8fd9fa565b8905d 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-29 09:59:47,149 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1478): Online Regions={b47e9f0a771e1e4ec8fd9fa565b8905d=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d., 1588230740=hbase:meta,,1.1588230740, 49d50a250e976229e4440bc35b7eaba5=hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5.} 2023-05-29 09:59:47,149 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:59:47,149 DEBUG [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1504): Waiting on 1588230740, 49d50a250e976229e4440bc35b7eaba5, b47e9f0a771e1e4ec8fd9fa565b8905d 2023-05-29 09:59:47,149 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:59:47,149 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:59:47,150 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:59:47,150 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:59:47,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-29 09:59:47,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/8f94b3eaaa2b4730ab8b3ca7c0cca560 2023-05-29 09:59:47,175 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.84 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/.tmp/info/60b0ed34703448a9a1743414c7e8c333 2023-05-29 09:59:47,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/.tmp/info/8f94b3eaaa2b4730ab8b3ca7c0cca560 as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/8f94b3eaaa2b4730ab8b3ca7c0cca560 2023-05-29 09:59:47,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/8f94b3eaaa2b4730ab8b3ca7c0cca560, entries=1, sequenceid=22, filesize=5.8 K 2023-05-29 09:59:47,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for b47e9f0a771e1e4ec8fd9fa565b8905d in 38ms, sequenceid=22, compaction requested=true 2023-05-29 09:59:47,194 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c] to archive 2023-05-29 09:59:47,195 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 09:59:47,198 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965 to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/edd089754755440295c1b763f3f06965 2023-05-29 09:59:47,198 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/.tmp/table/bec76a47cf5044f6bb22ce4a7b284d0b 2023-05-29 09:59:47,199 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/fd2a859eca8f47248b562e59e7a00a7e 2023-05-29 09:59:47,201 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c to hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/info/57a6c64091634f75aca130dfc0755a4c 2023-05-29 09:59:47,209 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/.tmp/info/60b0ed34703448a9a1743414c7e8c333 as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/info/60b0ed34703448a9a1743414c7e8c333 2023-05-29 09:59:47,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/b47e9f0a771e1e4ec8fd9fa565b8905d/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-29 09:59:47,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 
2023-05-29 09:59:47,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b47e9f0a771e1e4ec8fd9fa565b8905d: 2023-05-29 09:59:47,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1685354326704.b47e9f0a771e1e4ec8fd9fa565b8905d. 2023-05-29 09:59:47,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49d50a250e976229e4440bc35b7eaba5, disabling compactions & flushes 2023-05-29 09:59:47,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:59:47,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:59:47,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. after waiting 0 ms 2023-05-29 09:59:47,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:59:47,219 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/info/60b0ed34703448a9a1743414c7e8c333, entries=20, sequenceid=14, filesize=7.6 K 2023-05-29 09:59:47,220 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/.tmp/table/bec76a47cf5044f6bb22ce4a7b284d0b as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/table/bec76a47cf5044f6bb22ce4a7b284d0b 2023-05-29 09:59:47,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/namespace/49d50a250e976229e4440bc35b7eaba5/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 09:59:47,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 2023-05-29 09:59:47,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49d50a250e976229e4440bc35b7eaba5: 2023-05-29 09:59:47,224 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685354326159.49d50a250e976229e4440bc35b7eaba5. 
2023-05-29 09:59:47,226 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/table/bec76a47cf5044f6bb22ce4a7b284d0b, entries=4, sequenceid=14, filesize=4.9 K 2023-05-29 09:59:47,227 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3174, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 77ms, sequenceid=14, compaction requested=false 2023-05-29 09:59:47,232 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-29 09:59:47,233 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 09:59:47,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:59:47,233 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:59:47,233 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 09:59:47,349 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33323,1685354325554; all regions closed. 2023-05-29 09:59:47,350 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:47,357 DEBUG [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs 2023-05-29 09:59:47,357 INFO [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33323%2C1685354325554.meta:.meta(num 1685354326105) 2023-05-29 09:59:47,357 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/WALs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:47,363 DEBUG [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/oldWALs 2023-05-29 09:59:47,363 INFO [RS:0;jenkins-hbase4:33323] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33323%2C1685354325554:(num 1685354387134) 2023-05-29 09:59:47,363 DEBUG [RS:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:59:47,363 INFO [RS:0;jenkins-hbase4:33323] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:59:47,363 INFO [RS:0;jenkins-hbase4:33323] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 09:59:47,363 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
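By this point the compacted store files have been moved under the archive directory and the finished WALs under oldWALs. The short sketch below lists both locations with the Hadoop FileSystem API; the HDFS root is copied from the log, whereas in a real deployment it would come from hbase.rootdir.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Root directory taken from the log output above.
    String root = "hdfs://localhost:40249/user/jenkins/test-data/"
        + "cf7c3412-26f7-612d-72d5-9ad1a9443321";
    FileSystem fs = FileSystem.get(new Path(root).toUri(), conf);

    // Compacted store files end up under <root>/archive/... once the store closes.
    for (FileStatus st : fs.listStatus(new Path(root, "archive"))) {
      System.out.println("archive: " + st.getPath());
    }
    // Rolled-and-finished WALs are moved to <root>/oldWALs until the cleaner
    // chore eventually removes them.
    for (FileStatus st : fs.listStatus(new Path(root, "oldWALs"))) {
      System.out.println("oldWAL: " + st.getPath());
    }
  }
}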
2023-05-29 09:59:47,364 INFO [RS:0;jenkins-hbase4:33323] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33323 2023-05-29 09:59:47,367 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33323,1685354325554 2023-05-29 09:59:47,367 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:59:47,367 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:59:47,368 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33323,1685354325554] 2023-05-29 09:59:47,368 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33323,1685354325554; numProcessing=1 2023-05-29 09:59:47,369 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33323,1685354325554 already deleted, retry=false 2023-05-29 09:59:47,369 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33323,1685354325554 expired; onlineServers=0 2023-05-29 09:59:47,369 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,36093,1685354325517' ***** 2023-05-29 09:59:47,369 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 09:59:47,370 DEBUG [M:0;jenkins-hbase4:36093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d0f8dc9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:59:47,370 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:59:47,370 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36093,1685354325517; all regions closed. 2023-05-29 09:59:47,370 DEBUG [M:0;jenkins-hbase4:36093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 09:59:47,370 DEBUG [M:0;jenkins-hbase4:36093] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 09:59:47,370 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-29 09:59:47,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354325749] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354325749,5,FailOnTimeoutGroup] 2023-05-29 09:59:47,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354325749] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354325749,5,FailOnTimeoutGroup] 2023-05-29 09:59:47,370 DEBUG [M:0;jenkins-hbase4:36093] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 09:59:47,371 INFO [M:0;jenkins-hbase4:36093] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 09:59:47,372 INFO [M:0;jenkins-hbase4:36093] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 09:59:47,372 INFO [M:0;jenkins-hbase4:36093] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 09:59:47,372 DEBUG [M:0;jenkins-hbase4:36093] master.HMaster(1512): Stopping service threads 2023-05-29 09:59:47,372 INFO [M:0;jenkins-hbase4:36093] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 09:59:47,372 ERROR [M:0;jenkins-hbase4:36093] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 09:59:47,372 INFO [M:0;jenkins-hbase4:36093] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 09:59:47,372 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 09:59:47,373 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 09:59:47,373 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:47,373 DEBUG [M:0;jenkins-hbase4:36093] zookeeper.ZKUtil(398): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 09:59:47,373 WARN [M:0;jenkins-hbase4:36093] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 09:59:47,373 INFO [M:0;jenkins-hbase4:36093] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 09:59:47,373 INFO [M:0;jenkins-hbase4:36093] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 09:59:47,373 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:59:47,374 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:59:47,374 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:47,374 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:47,374 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:59:47,374 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 09:59:47,374 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.89 KB heapSize=47.33 KB 2023-05-29 09:59:47,387 INFO [M:0;jenkins-hbase4:36093] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0f5066301639459bb58055317fb698aa 2023-05-29 09:59:47,392 INFO [M:0;jenkins-hbase4:36093] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f5066301639459bb58055317fb698aa 2023-05-29 09:59:47,393 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0f5066301639459bb58055317fb698aa as hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0f5066301639459bb58055317fb698aa 2023-05-29 09:59:47,399 INFO [M:0;jenkins-hbase4:36093] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f5066301639459bb58055317fb698aa 2023-05-29 09:59:47,399 INFO [M:0;jenkins-hbase4:36093] regionserver.HStore(1080): Added hdfs://localhost:40249/user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0f5066301639459bb58055317fb698aa, entries=11, sequenceid=100, filesize=6.1 K 2023-05-29 09:59:47,400 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegion(2948): Finished flush of dataSize ~38.89 KB/39824, heapSize ~47.31 KB/48448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=100, compaction requested=false 2023-05-29 09:59:47,401 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:47,401 DEBUG [M:0;jenkins-hbase4:36093] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:59:47,401 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/cf7c3412-26f7-612d-72d5-9ad1a9443321/MasterData/WALs/jenkins-hbase4.apache.org,36093,1685354325517 2023-05-29 09:59:47,404 INFO [M:0;jenkins-hbase4:36093] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 09:59:47,404 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 09:59:47,405 INFO [M:0;jenkins-hbase4:36093] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36093 2023-05-29 09:59:47,406 DEBUG [M:0;jenkins-hbase4:36093] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36093,1685354325517 already deleted, retry=false 2023-05-29 09:59:47,468 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:59:47,468 INFO [RS:0;jenkins-hbase4:33323] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33323,1685354325554; zookeeper connection closed. 
2023-05-29 09:59:47,468 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): regionserver:33323-0x1007660a8ff0001, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:59:47,469 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@8b31a1e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@8b31a1e 2023-05-29 09:59:47,469 INFO [Listener at localhost/40607] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 09:59:47,568 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:59:47,569 INFO [M:0;jenkins-hbase4:36093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36093,1685354325517; zookeeper connection closed. 2023-05-29 09:59:47,569 DEBUG [Listener at localhost/40607-EventThread] zookeeper.ZKWatcher(600): master:36093-0x1007660a8ff0000, quorum=127.0.0.1:55831, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 09:59:47,569 WARN [Listener at localhost/40607] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:59:47,573 INFO [Listener at localhost/40607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:59:47,678 WARN [BP-2061522236-172.31.14.131-1685354324935 heartbeating to localhost/127.0.0.1:40249] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:59:47,678 WARN [BP-2061522236-172.31.14.131-1685354324935 heartbeating to localhost/127.0.0.1:40249] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2061522236-172.31.14.131-1685354324935 (Datanode Uuid 1d4617ca-7744-474b-b3c8-34399e37fd04) service to localhost/127.0.0.1:40249 2023-05-29 09:59:47,679 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/dfs/data/data3/current/BP-2061522236-172.31.14.131-1685354324935] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:59:47,679 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/dfs/data/data4/current/BP-2061522236-172.31.14.131-1685354324935] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:59:47,680 WARN [Listener at localhost/40607] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 09:59:47,684 INFO [Listener at localhost/40607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:59:47,788 WARN [BP-2061522236-172.31.14.131-1685354324935 heartbeating to localhost/127.0.0.1:40249] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 09:59:47,788 WARN [BP-2061522236-172.31.14.131-1685354324935 heartbeating to localhost/127.0.0.1:40249] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-2061522236-172.31.14.131-1685354324935 (Datanode Uuid 6e3fcf71-6658-418b-a66f-9a7914b851f6) service to localhost/127.0.0.1:40249 2023-05-29 09:59:47,788 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/dfs/data/data1/current/BP-2061522236-172.31.14.131-1685354324935] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:59:47,789 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/cluster_9a8b61ea-c527-7622-7915-f83009d71524/dfs/data/data2/current/BP-2061522236-172.31.14.131-1685354324935] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 09:59:47,801 INFO [Listener at localhost/40607] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 09:59:47,824 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 09:59:47,914 INFO [Listener at localhost/40607] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 09:59:47,936 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 09:59:47,946 INFO [Listener at localhost/40607] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 85) - Thread LEAK? -, OpenFileDescriptor=505 (was 463) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=39 (was 37) - SystemLoadAverage LEAK? 
-, ProcessCount=168 (was 168), AvailableMemoryMB=2880 (was 3178) 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=505, MaxFileDescriptor=60000, SystemLoadAverage=39, ProcessCount=168, AvailableMemoryMB=2880 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/hadoop.log.dir so I do NOT create it in target/test-data/096b0724-dba8-f296-3a17-2e90c296b495 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/94261f9b-9bbc-a185-002d-e4fe011027f0/hadoop.tmp.dir so I do NOT create it in target/test-data/096b0724-dba8-f296-3a17-2e90c296b495 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c, deleteOnExit=true 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/test.cache.data in system properties and HBase conf 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 09:59:47,955 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/hadoop.log.dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 09:59:47,956 DEBUG [Listener at 
localhost/40607] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 09:59:47,956 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/nfs.dump.dir in system properties and HBase conf 
2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/java.io.tmpdir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 09:59:47,957 INFO [Listener at localhost/40607] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 09:59:47,959 WARN [Listener at localhost/40607] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 09:59:47,961 WARN [Listener at localhost/40607] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:59:47,962 WARN [Listener at localhost/40607] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:59:48,006 WARN [Listener at localhost/40607] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:59:48,008 INFO [Listener at localhost/40607] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:59:48,013 INFO [Listener at localhost/40607] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/java.io.tmpdir/Jetty_localhost_41009_hdfs____.98uwfu/webapp 2023-05-29 09:59:48,106 INFO [Listener at localhost/40607] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41009 2023-05-29 09:59:48,108 WARN [Listener at localhost/40607] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
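At this point the test is bringing up a fresh minicluster for testLogRolling: one master, one region server, two HDFS datanodes and one ZooKeeper server, with all the test-data directories wired in as system properties. A minimal sketch of driving the same startup through HBaseTestingUtility follows; class and variable names are illustrative, the real test does this in its own setup methods.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1} above.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZooKeeper and HBase
    try {
      // test body goes here
    } finally {
      util.shutdownMiniCluster();    // ends with the "Minicluster is down" entry logged earlier
    }
  }
}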
2023-05-29 09:59:48,112 WARN [Listener at localhost/40607] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 09:59:48,112 WARN [Listener at localhost/40607] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 09:59:48,159 WARN [Listener at localhost/43865] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:59:48,168 WARN [Listener at localhost/43865] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:59:48,170 WARN [Listener at localhost/43865] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:59:48,171 INFO [Listener at localhost/43865] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:59:48,176 INFO [Listener at localhost/43865] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/java.io.tmpdir/Jetty_localhost_44429_datanode____ip9lcy/webapp 2023-05-29 09:59:48,266 INFO [Listener at localhost/43865] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44429 2023-05-29 09:59:48,271 WARN [Listener at localhost/42957] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:59:48,283 WARN [Listener at localhost/42957] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 09:59:48,285 WARN [Listener at localhost/42957] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 09:59:48,286 INFO [Listener at localhost/42957] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 09:59:48,289 INFO [Listener at localhost/42957] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/java.io.tmpdir/Jetty_localhost_34527_datanode____uygtr4/webapp 2023-05-29 09:59:48,363 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5ca6f0ff69f55afd: Processing first storage report for DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514 from datanode 4d18200f-2f86-4b2f-839e-f664935e38cd 2023-05-29 09:59:48,363 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5ca6f0ff69f55afd: from storage DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514 node DatanodeRegistration(127.0.0.1:35609, datanodeUuid=4d18200f-2f86-4b2f-839e-f664935e38cd, infoPort=33031, infoSecurePort=0, ipcPort=42957, storageInfo=lv=-57;cid=testClusterID;nsid=830969267;c=1685354387964), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:59:48,363 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5ca6f0ff69f55afd: Processing first storage report for DS-6c1bcad3-b389-424c-8c5a-553b46ca4eb5 from datanode 4d18200f-2f86-4b2f-839e-f664935e38cd 2023-05-29 09:59:48,363 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0x5ca6f0ff69f55afd: from storage DS-6c1bcad3-b389-424c-8c5a-553b46ca4eb5 node DatanodeRegistration(127.0.0.1:35609, datanodeUuid=4d18200f-2f86-4b2f-839e-f664935e38cd, infoPort=33031, infoSecurePort=0, ipcPort=42957, storageInfo=lv=-57;cid=testClusterID;nsid=830969267;c=1685354387964), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:59:48,387 INFO [Listener at localhost/42957] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34527 2023-05-29 09:59:48,394 WARN [Listener at localhost/32845] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 09:59:48,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x716a5fd63df3ce97: Processing first storage report for DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5 from datanode d4f6772f-5ec4-4b60-9bad-b74e88ae2bcd 2023-05-29 09:59:48,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x716a5fd63df3ce97: from storage DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5 node DatanodeRegistration(127.0.0.1:41541, datanodeUuid=d4f6772f-5ec4-4b60-9bad-b74e88ae2bcd, infoPort=32799, infoSecurePort=0, ipcPort=32845, storageInfo=lv=-57;cid=testClusterID;nsid=830969267;c=1685354387964), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:59:48,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x716a5fd63df3ce97: Processing first storage report for DS-99579344-cb3d-4b95-89fe-9538a5a1fa95 from datanode d4f6772f-5ec4-4b60-9bad-b74e88ae2bcd 2023-05-29 09:59:48,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x716a5fd63df3ce97: from storage DS-99579344-cb3d-4b95-89fe-9538a5a1fa95 node DatanodeRegistration(127.0.0.1:41541, datanodeUuid=d4f6772f-5ec4-4b60-9bad-b74e88ae2bcd, infoPort=32799, infoSecurePort=0, ipcPort=32845, storageInfo=lv=-57;cid=testClusterID;nsid=830969267;c=1685354387964), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 09:59:48,500 DEBUG [Listener at localhost/32845] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495 2023-05-29 09:59:48,503 INFO [Listener at localhost/32845] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/zookeeper_0, clientPort=59759, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 09:59:48,504 INFO [Listener at localhost/32845] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59759 2023-05-29 09:59:48,504 INFO [Listener at localhost/32845] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,505 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,518 INFO [Listener at localhost/32845] util.FSUtils(471): Created version file at hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1 with version=8 2023-05-29 09:59:48,518 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 09:59:48,519 INFO [Listener at localhost/32845] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:59:48,520 INFO [Listener at localhost/32845] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:59:48,521 INFO [Listener at localhost/32845] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37125 2023-05-29 09:59:48,521 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,522 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,523 INFO [Listener at localhost/32845] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37125 connecting to ZooKeeper ensemble=127.0.0.1:59759 2023-05-29 09:59:48,530 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:371250x0, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:59:48,530 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37125-0x10076619f190000 connected 2023-05-29 09:59:48,543 DEBUG [Listener at localhost/32845] 
zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:59:48,543 DEBUG [Listener at localhost/32845] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:59:48,544 DEBUG [Listener at localhost/32845] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:59:48,544 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37125 2023-05-29 09:59:48,544 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37125 2023-05-29 09:59:48,544 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37125 2023-05-29 09:59:48,545 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37125 2023-05-29 09:59:48,545 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37125 2023-05-29 09:59:48,545 INFO [Listener at localhost/32845] master.HMaster(444): hbase.rootdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1, hbase.cluster.distributed=false 2023-05-29 09:59:48,557 INFO [Listener at localhost/32845] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 09:59:48,558 INFO [Listener at localhost/32845] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 09:59:48,559 INFO [Listener at localhost/32845] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37289 2023-05-29 09:59:48,559 INFO [Listener at localhost/32845] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 09:59:48,560 DEBUG [Listener at localhost/32845] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 
09:59:48,560 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,561 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,562 INFO [Listener at localhost/32845] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37289 connecting to ZooKeeper ensemble=127.0.0.1:59759 2023-05-29 09:59:48,565 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:372890x0, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 09:59:48,566 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37289-0x10076619f190001 connected 2023-05-29 09:59:48,566 DEBUG [Listener at localhost/32845] zookeeper.ZKUtil(164): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 09:59:48,566 DEBUG [Listener at localhost/32845] zookeeper.ZKUtil(164): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 09:59:48,567 DEBUG [Listener at localhost/32845] zookeeper.ZKUtil(164): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 09:59:48,567 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37289 2023-05-29 09:59:48,567 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37289 2023-05-29 09:59:48,567 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37289 2023-05-29 09:59:48,570 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37289 2023-05-29 09:59:48,572 DEBUG [Listener at localhost/32845] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37289 2023-05-29 09:59:48,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,574 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:59:48,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,577 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:59:48,577 DEBUG 
[Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 09:59:48,577 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:59:48,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37125,1685354388519 from backup master directory 2023-05-29 09:59:48,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 09:59:48,579 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,579 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 09:59:48,579 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
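The watcher traffic above is the normal active-master election: the master registers under /hbase/backup-masters, wins the /hbase/master znode, then deletes its backup-masters entry. A test or client does not need to read those znodes directly; the sketch below, an assumption rather than code from this test, resolves the active master through the Admin API instead.

import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ActiveMasterSketch {
  // Returns the active master's ServerName, e.g. jenkins-hbase4.apache.org,37125,1685354388519.
  static ServerName activeMaster(Configuration conf) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      return admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.MASTER)).getMasterName();
    }
  }
}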
2023-05-29 09:59:48,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/hbase.id with ID: 36392e6b-e8a7-42cd-95e6-88b9a9826497 2023-05-29 09:59:48,605 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:48,608 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x052e3ec9 to 127.0.0.1:59759 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:59:48,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7437d506, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:59:48,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:59:48,621 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 09:59:48,621 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:59:48,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store-tmp 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 09:59:48,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:48,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 09:59:48,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:59:48,632 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/WALs/jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37125%2C1685354388519, suffix=, logDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/WALs/jenkins-hbase4.apache.org,37125,1685354388519, archiveDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/oldWALs, maxLogs=10 2023-05-29 09:59:48,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/WALs/jenkins-hbase4.apache.org,37125,1685354388519/jenkins-hbase4.apache.org%2C37125%2C1685354388519.1685354388634 2023-05-29 09:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41541,DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35609,DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514,DISK]] 2023-05-29 09:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,643 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,644 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 09:59:48,645 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 09:59:48,645 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:48,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 09:59:48,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:59:48,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=782812, jitterRate=-0.004603922367095947}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:59:48,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 09:59:48,654 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 09:59:48,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 09:59:48,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
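The WAL created above for the master's local region is an FSHLog with blocksize=256 MB, rollsize=128 MB and maxLogs=10; the roll size is the block size times a multiplier (0.5 here), which is the rolling behaviour TestLogRolling exercises on region server WALs. The keys below are the usual region-server-side knobs and are given as an illustrative sketch; the master's local store WAL may honour its own settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollConfigSketch {
  // Illustrative settings behind "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=10":
  // the roll size is derived as blocksize * hbase.regionserver.logroll.multiplier.
  static Configuration walRollConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll at half the block size
    conf.setInt("hbase.regionserver.maxlogs", 10);                         // WAL count before forced flushes
    return conf;
  }
}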
2023-05-29 09:59:48,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 09:59:48,656 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 09:59:48,656 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 09:59:48,656 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 09:59:48,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 09:59:48,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 09:59:48,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 09:59:48,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
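The balancer entry above lists the StochasticLoadBalancer's loaded parameters (maxSteps, runMaxSteps, stepsPerRegion, maxRunningTime) and its cost functions. These are plain configuration values; the sketch below shows the commonly used keys with the same values as an illustration, not the test's own settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerConfigSketch {
  // Knobs matching "maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000" above.
  static Configuration balancerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1000000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
    return conf;
  }
}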
2023-05-29 09:59:48,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 09:59:48,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 09:59:48,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 09:59:48,675 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 09:59:48,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 09:59:48,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 09:59:48,677 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:59:48,677 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 09:59:48,677 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37125,1685354388519, sessionid=0x10076619f190000, setting cluster-up flag (Was=false) 2023-05-29 09:59:48,682 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 09:59:48,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,690 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
09:59:48,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 09:59:48,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:48,695 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.hbase-snapshot/.tmp 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:59:48,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,698 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354418698 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 09:59:48,699 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 09:59:48,699 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 09:59:48,699 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 09:59:48,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 09:59:48,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354388700,5,FailOnTimeoutGroup] 2023-05-29 09:59:48,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354388700,5,FailOnTimeoutGroup] 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
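Each cleaner registered above (LogsCleaner, HFileCleaner, ReplicationBarrierCleaner, SnapshotCleaner) is a ScheduledChore handed to the master's ChoreService at a fixed period. A minimal sketch of that pattern follows; the chore name is made up and the body is a placeholder.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  // Builds a chore that runs every 600000 ms, mirroring "ScheduledChore name=LogsCleaner, period=600000".
  static ScheduledChore exampleChore(Stoppable stopper) {
    return new ScheduledChore("ExampleCleaner", stopper, 600_000) {
      @Override
      protected void chore() {
        // periodic cleanup work goes here
      }
    };
  }

  static void schedule(ChoreService service, ScheduledChore chore) {
    service.scheduleChore(chore); // the ChoreService thread pool runs it on schedule
  }
}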
2023-05-29 09:59:48,701 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:59:48,709 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:59:48,710 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 09:59:48,710 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1 2023-05-29 09:59:48,717 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:48,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:59:48,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info 2023-05-29 09:59:48,719 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:59:48,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:48,720 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:59:48,721 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:59:48,721 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:59:48,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:48,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:59:48,723 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/table 2023-05-29 09:59:48,723 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:59:48,723 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:48,724 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740 2023-05-29 09:59:48,724 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740 2023-05-29 09:59:48,726 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:59:48,727 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:59:48,729 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:59:48,729 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835085, jitterRate=0.06186637282371521}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:59:48,729 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:59:48,730 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 09:59:48,730 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 09:59:48,730 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 09:59:48,730 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 09:59:48,730 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 09:59:48,731 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 09:59:48,731 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 09:59:48,732 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 09:59:48,732 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 09:59:48,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 09:59:48,733 INFO 
[PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 09:59:48,734 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-29 09:59:48,774 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(951): ClusterId : 36392e6b-e8a7-42cd-95e6-88b9a9826497 2023-05-29 09:59:48,774 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 09:59:48,777 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 09:59:48,778 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 09:59:48,779 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 09:59:48,780 DEBUG [RS:0;jenkins-hbase4:37289] zookeeper.ReadOnlyZKClient(139): Connect 0x5ff9ef56 to 127.0.0.1:59759 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:59:48,783 DEBUG [RS:0;jenkins-hbase4:37289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@513e0ad9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:59:48,784 DEBUG [RS:0;jenkins-hbase4:37289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55018426, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 09:59:48,792 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37289 2023-05-29 09:59:48,792 INFO [RS:0;jenkins-hbase4:37289] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 09:59:48,792 INFO [RS:0;jenkins-hbase4:37289] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 09:59:48,792 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 09:59:48,793 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,37125,1685354388519 with isa=jenkins-hbase4.apache.org/172.31.14.131:37289, startcode=1685354388557 2023-05-29 09:59:48,793 DEBUG [RS:0;jenkins-hbase4:37289] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 09:59:48,796 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44153, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 09:59:48,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,797 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1 2023-05-29 09:59:48,797 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43865 2023-05-29 09:59:48,797 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 09:59:48,799 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 09:59:48,799 DEBUG [RS:0;jenkins-hbase4:37289] zookeeper.ZKUtil(162): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,799 WARN [RS:0;jenkins-hbase4:37289] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 09:59:48,799 INFO [RS:0;jenkins-hbase4:37289] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:59:48,799 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1946): logDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,799 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37289,1685354388557] 2023-05-29 09:59:48,803 DEBUG [RS:0;jenkins-hbase4:37289] zookeeper.ZKUtil(162): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,804 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 09:59:48,804 INFO [RS:0;jenkins-hbase4:37289] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 09:59:48,805 INFO [RS:0;jenkins-hbase4:37289] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 09:59:48,805 INFO [RS:0;jenkins-hbase4:37289] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 09:59:48,806 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,806 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 09:59:48,807 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,807 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,808 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,808 DEBUG [RS:0;jenkins-hbase4:37289] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 09:59:48,808 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,808 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,808 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:48,819 INFO [RS:0;jenkins-hbase4:37289] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 09:59:48,819 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37289,1685354388557-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 09:59:48,829 INFO [RS:0;jenkins-hbase4:37289] regionserver.Replication(203): jenkins-hbase4.apache.org,37289,1685354388557 started 2023-05-29 09:59:48,829 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37289,1685354388557, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37289, sessionid=0x10076619f190001 2023-05-29 09:59:48,829 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 09:59:48,829 DEBUG [RS:0;jenkins-hbase4:37289] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,829 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37289,1685354388557' 2023-05-29 09:59:48,829 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 09:59:48,829 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37289,1685354388557' 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 09:59:48,830 DEBUG [RS:0;jenkins-hbase4:37289] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 09:59:48,830 INFO [RS:0;jenkins-hbase4:37289] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 09:59:48,830 INFO [RS:0;jenkins-hbase4:37289] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 09:59:48,885 DEBUG [jenkins-hbase4:37125] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 09:59:48,886 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37289,1685354388557, state=OPENING 2023-05-29 09:59:48,888 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 09:59:48,889 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:48,889 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:59:48,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 09:59:48,932 INFO [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37289%2C1685354388557, suffix=, logDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557, archiveDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs, maxLogs=32 2023-05-29 09:59:48,940 INFO [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354388933 2023-05-29 09:59:48,940 DEBUG [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41541,DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5,DISK], DatanodeInfoWithStorage[127.0.0.1:35609,DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514,DISK]] 2023-05-29 09:59:49,044 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:49,044 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 09:59:49,046 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40266, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 09:59:49,049 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 09:59:49,050 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 09:59:49,051 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37289%2C1685354388557.meta, suffix=.meta, logDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557, archiveDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs, maxLogs=32 2023-05-29 09:59:49,058 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.meta.1685354389052.meta 2023-05-29 09:59:49,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35609,DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514,DISK], DatanodeInfoWithStorage[127.0.0.1:41541,DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5,DISK]] 2023-05-29 09:59:49,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:59:49,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 09:59:49,058 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 09:59:49,059 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 09:59:49,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 09:59:49,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:49,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 09:59:49,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 09:59:49,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 09:59:49,064 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info 2023-05-29 09:59:49,064 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info 2023-05-29 09:59:49,064 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 09:59:49,065 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:49,065 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 09:59:49,065 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:59:49,066 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/rep_barrier 2023-05-29 09:59:49,066 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 09:59:49,066 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:49,066 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 09:59:49,067 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/table 2023-05-29 09:59:49,067 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/table 2023-05-29 09:59:49,068 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 09:59:49,068 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:49,069 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740 2023-05-29 09:59:49,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740 2023-05-29 09:59:49,072 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 09:59:49,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 09:59:49,075 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=840012, jitterRate=0.06813152134418488}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 09:59:49,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 09:59:49,077 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354389044 2023-05-29 09:59:49,083 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 09:59:49,084 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 09:59:49,089 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37289,1685354388557, state=OPEN 2023-05-29 09:59:49,091 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 09:59:49,091 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 09:59:49,093 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 09:59:49,093 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37289,1685354388557 in 202 msec 2023-05-29 09:59:49,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 09:59:49,096 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 361 msec 2023-05-29 09:59:49,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 401 msec 2023-05-29 09:59:49,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354389098, completionTime=-1 2023-05-29 09:59:49,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 09:59:49,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 09:59:49,100 DEBUG [hconnection-0x107fdccf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:59:49,103 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:59:49,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 09:59:49,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354449104 2023-05-29 09:59:49,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354509104 2023-05-29 09:59:49,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37125,1685354388519-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37125,1685354388519-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37125,1685354388519-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37125, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 09:59:49,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-29 09:59:49,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 09:59:49,112 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 09:59:49,113 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 09:59:49,114 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:59:49,115 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:59:49,116 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,117 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a empty. 2023-05-29 09:59:49,117 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,117 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 09:59:49,132 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 09:59:49,133 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 058fa94683023bd6d76721a8be9e197a, NAME => 'hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp 2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 058fa94683023bd6d76721a8be9e197a, disabling compactions & flushes 2023-05-29 09:59:49,140 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 
2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. after waiting 0 ms 2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 09:59:49,140 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 09:59:49,140 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 058fa94683023bd6d76721a8be9e197a: 2023-05-29 09:59:49,143 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:59:49,144 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354389144"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354389144"}]},"ts":"1685354389144"} 2023-05-29 09:59:49,146 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:59:49,147 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:59:49,147 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354389147"}]},"ts":"1685354389147"} 2023-05-29 09:59:49,148 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 09:59:49,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=058fa94683023bd6d76721a8be9e197a, ASSIGN}] 2023-05-29 09:59:49,156 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=058fa94683023bd6d76721a8be9e197a, ASSIGN 2023-05-29 09:59:49,157 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=058fa94683023bd6d76721a8be9e197a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37289,1685354388557; forceNewPlan=false, retain=false 2023-05-29 09:59:49,308 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=058fa94683023bd6d76721a8be9e197a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:49,308 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354389308"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354389308"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354389308"}]},"ts":"1685354389308"} 2023-05-29 09:59:49,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 058fa94683023bd6d76721a8be9e197a, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 09:59:49,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 09:59:49,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 058fa94683023bd6d76721a8be9e197a, NAME => 'hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:59:49,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:49,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,468 INFO [StoreOpener-058fa94683023bd6d76721a8be9e197a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,469 DEBUG [StoreOpener-058fa94683023bd6d76721a8be9e197a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/info 2023-05-29 09:59:49,469 DEBUG [StoreOpener-058fa94683023bd6d76721a8be9e197a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/info 2023-05-29 09:59:49,469 INFO [StoreOpener-058fa94683023bd6d76721a8be9e197a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 058fa94683023bd6d76721a8be9e197a columnFamilyName info 2023-05-29 09:59:49,470 INFO [StoreOpener-058fa94683023bd6d76721a8be9e197a-1] regionserver.HStore(310): Store=058fa94683023bd6d76721a8be9e197a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:49,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 058fa94683023bd6d76721a8be9e197a 2023-05-29 09:59:49,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:59:49,476 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 058fa94683023bd6d76721a8be9e197a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=762686, jitterRate=-0.030194774270057678}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:59:49,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 058fa94683023bd6d76721a8be9e197a: 2023-05-29 09:59:49,477 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a., pid=6, masterSystemTime=1685354389462 2023-05-29 09:59:49,479 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 09:59:49,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 
2023-05-29 09:59:49,481 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=058fa94683023bd6d76721a8be9e197a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:49,481 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354389481"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354389481"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354389481"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354389481"}]},"ts":"1685354389481"} 2023-05-29 09:59:49,484 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 09:59:49,484 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 058fa94683023bd6d76721a8be9e197a, server=jenkins-hbase4.apache.org,37289,1685354388557 in 172 msec 2023-05-29 09:59:49,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 09:59:49,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=058fa94683023bd6d76721a8be9e197a, ASSIGN in 330 msec 2023-05-29 09:59:49,487 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:59:49,488 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354389487"}]},"ts":"1685354389487"} 2023-05-29 09:59:49,489 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 09:59:49,495 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:59:49,497 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 383 msec 2023-05-29 09:59:49,514 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 09:59:49,515 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:59:49,515 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:49,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 09:59:49,526 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): 
master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:59:49,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-05-29 09:59:49,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 09:59:49,553 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 09:59:49,556 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-05-29 09:59:49,564 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 09:59:49,567 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 09:59:49,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.988sec 2023-05-29 09:59:49,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 09:59:49,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 09:59:49,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 09:59:49,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37125,1685354388519-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 09:59:49,568 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37125,1685354388519-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-29 09:59:49,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 09:59:49,575 DEBUG [Listener at localhost/32845] zookeeper.ReadOnlyZKClient(139): Connect 0x63710009 to 127.0.0.1:59759 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 09:59:49,578 DEBUG [Listener at localhost/32845] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58da9ce5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 09:59:49,580 DEBUG [hconnection-0x42b2fa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 09:59:49,585 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 09:59:49,586 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 09:59:49,586 INFO [Listener at localhost/32845] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 09:59:49,590 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 09:59:49,590 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 09:59:49,590 INFO [Listener at localhost/32845] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 09:59:49,592 DEBUG [Listener at localhost/32845] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-29 09:59:49,594 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57786, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-29 09:59:49,595 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-29 09:59:49,596 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
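The two TableDescriptorChecker warnings above are expected for this test: the region max file size (786432 bytes) and memstore flush size (8192 bytes) are deliberately tiny so that flushes, compactions and splits happen within seconds rather than hours. A minimal sketch of how such values could be set on a test Configuration before the table is created; the property names are the ones quoted in the warnings, while the helper class itself is hypothetical and not taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TinyRegionConf {
        // Returns a configuration with deliberately small region thresholds,
        // mirroring the values reported by TableDescriptorChecker above.
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            conf.setLong("hbase.hregion.max.filesize", 786432L);      // ~768 KB instead of the 10 GB default
            conf.setLong("hbase.hregion.memstore.flush.size", 8192L); // 8 KB instead of the 128 MB default
            return conf;
        }
    }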
2023-05-29 09:59:49,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 09:59:49,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-29 09:59:49,600 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 09:59:49,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] master.MasterRpcServices(697): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-29 09:59:49,601 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 09:59:49,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:59:49,602 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,603 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe empty. 
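The create request above is logged in shell-style descriptor syntax. A rough Java equivalent using the HBase 2.x client API is sketched below for orientation; the table name, the 'info' family, BLOOMFILTER => 'ROW' and VERSIONS => '1' come from the log line, while the connection handling is an assumption:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTestLogRollingTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection();
                 Admin admin = conn.getAdmin()) {
                admin.createTable(TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("TestLogRolling-testLogRolling"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                        .newBuilder(Bytes.toBytes("info"))
                        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                        .setMaxVersions(1)                 // VERSIONS => '1'
                        .build())
                    .build());
            }
        }
    }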
2023-05-29 09:59:49,603 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,603 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-29 09:59:49,613 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-29 09:59:49,614 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => c53fc971d0411aedd63a16773066d9fe, NAME => 'TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/.tmp 2023-05-29 09:59:49,620 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:49,620 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing c53fc971d0411aedd63a16773066d9fe, disabling compactions & flushes 2023-05-29 09:59:49,620 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:49,620 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:49,620 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. after waiting 0 ms 2023-05-29 09:59:49,621 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:49,621 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 
2023-05-29 09:59:49,621 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 09:59:49,623 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 09:59:49,623 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354389623"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354389623"}]},"ts":"1685354389623"} 2023-05-29 09:59:49,625 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 09:59:49,626 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 09:59:49,626 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354389626"}]},"ts":"1685354389626"} 2023-05-29 09:59:49,627 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-29 09:59:49,630 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, ASSIGN}] 2023-05-29 09:59:49,631 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, ASSIGN 2023-05-29 09:59:49,632 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37289,1685354388557; forceNewPlan=false, retain=false 2023-05-29 09:59:49,783 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c53fc971d0411aedd63a16773066d9fe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:49,783 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354389783"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354389783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354389783"}]},"ts":"1685354389783"} 2023-05-29 09:59:49,785 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 09:59:49,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c53fc971d0411aedd63a16773066d9fe, NAME => 'TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.', STARTKEY => '', ENDKEY => ''} 2023-05-29 09:59:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 09:59:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,943 INFO [StoreOpener-c53fc971d0411aedd63a16773066d9fe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,945 DEBUG [StoreOpener-c53fc971d0411aedd63a16773066d9fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info 2023-05-29 09:59:49,945 DEBUG [StoreOpener-c53fc971d0411aedd63a16773066d9fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info 2023-05-29 09:59:49,945 INFO [StoreOpener-c53fc971d0411aedd63a16773066d9fe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c53fc971d0411aedd63a16773066d9fe columnFamilyName info 2023-05-29 09:59:49,945 INFO [StoreOpener-c53fc971d0411aedd63a16773066d9fe-1] regionserver.HStore(310): Store=c53fc971d0411aedd63a16773066d9fe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 09:59:49,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:49,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 09:59:49,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c53fc971d0411aedd63a16773066d9fe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=856669, jitterRate=0.08931101858615875}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 09:59:49,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 09:59:49,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe., pid=11, masterSystemTime=1685354389938 2023-05-29 09:59:49,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:49,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 
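The desiredMaxFileSize reported in the open message is consistent with the configured hbase.hregion.max.filesize of 786432 (see the TableDescriptorChecker warning earlier) scaled by the sampled jitterRate: 786432 × (1 + 0.0893) ≈ 856669 for this region, and 786432 × (1 − 0.0302) ≈ 762686 for the hbase:namespace region opened earlier. The split policy is therefore working off the deliberately small test value plus a per-open random jitter.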
2023-05-29 09:59:49,954 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=c53fc971d0411aedd63a16773066d9fe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 09:59:49,954 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354389954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354389954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354389954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354389954"}]},"ts":"1685354389954"} 2023-05-29 09:59:49,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-29 09:59:49,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557 in 171 msec 2023-05-29 09:59:49,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-29 09:59:49,960 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, ASSIGN in 328 msec 2023-05-29 09:59:49,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 09:59:49,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354389961"}]},"ts":"1685354389961"} 2023-05-29 09:59:49,962 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-29 09:59:49,964 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 09:59:49,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 368 msec 2023-05-29 09:59:52,761 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 09:59:54,804 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 09:59:54,805 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 09:59:54,805 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-29 09:59:59,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37125] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-29 09:59:59,602 INFO [Listener at localhost/32845] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRolling, 
procId: 9 completed 2023-05-29 09:59:59,605 DEBUG [Listener at localhost/32845] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-29 09:59:59,605 DEBUG [Listener at localhost/32845] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 09:59:59,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:59,617 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 09:59:59,628 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/13c6f47252ee476991c1ee2e42eb2070 2023-05-29 09:59:59,635 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/13c6f47252ee476991c1ee2e42eb2070 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070 2023-05-29 09:59:59,642 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070, entries=7, sequenceid=11, filesize=12.1 K 2023-05-29 09:59:59,643 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for c53fc971d0411aedd63a16773066d9fe in 26ms, sequenceid=11, compaction requested=false 2023-05-29 09:59:59,644 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 09:59:59,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 09:59:59,644 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-29 09:59:59,659 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=34 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/2b42ab16af734cd1bbbaf396857d88a0 2023-05-29 09:59:59,667 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/2b42ab16af734cd1bbbaf396857d88a0 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0 2023-05-29 09:59:59,672 INFO [MemStoreFlusher.0] 
regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0, entries=20, sequenceid=34, filesize=25.8 K 2023-05-29 09:59:59,673 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for c53fc971d0411aedd63a16773066d9fe in 29ms, sequenceid=34, compaction requested=false 2023-05-29 09:59:59,673 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 09:59:59,673 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=37.9 K, sizeToCheck=16.0 K 2023-05-29 09:59:59,673 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 09:59:59,673 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0 because midkey is the same as first or last row 2023-05-29 10:00:01,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:01,653 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:01,664 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=44 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/ff3422156f8d4a8d980eafd63204e11e 2023-05-29 10:00:01,670 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/ff3422156f8d4a8d980eafd63204e11e as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e 2023-05-29 10:00:01,675 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e, entries=7, sequenceid=44, filesize=12.1 K 2023-05-29 10:00:01,676 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for c53fc971d0411aedd63a16773066d9fe in 23ms, sequenceid=44, compaction requested=true 2023-05-29 10:00:01,676 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:01,677 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=50.0 K, sizeToCheck=16.0 K 2023-05-29 10:00:01,677 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:01,677 DEBUG 
[MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0 because midkey is the same as first or last row 2023-05-29 10:00:01,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:01,684 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:01,684 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:01,684 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=26.27 KB heapSize=28.38 KB 2023-05-29 10:00:01,686 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 51218 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:01,687 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): c53fc971d0411aedd63a16773066d9fe/info is initiating minor compaction (all files) 2023-05-29 10:00:01,687 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c53fc971d0411aedd63a16773066d9fe/info in TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 10:00:01,687 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp, totalSize=50.0 K 2023-05-29 10:00:01,687 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 13c6f47252ee476991c1ee2e42eb2070, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1685354399608 2023-05-29 10:00:01,688 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 2b42ab16af734cd1bbbaf396857d88a0, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=34, earliestPutTs=1685354399618 2023-05-29 10:00:01,689 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting ff3422156f8d4a8d980eafd63204e11e, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1685354399645 2023-05-29 10:00:01,692 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] 
regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-05-29 10:00:01,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] ipc.CallRunner(144): callId: 72 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:40284 deadline: 1685354411691, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:01,713 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.32 KB at sequenceid=73 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/d8b1dc62b3cd4a81832490de9efda71c 2023-05-29 10:00:01,715 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): c53fc971d0411aedd63a16773066d9fe#info#compaction#29 average throughput is 34.89 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:01,720 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/d8b1dc62b3cd4a81832490de9efda71c as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c 2023-05-29 10:00:01,727 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c, entries=26, sequenceid=73, filesize=32.1 K 2023-05-29 10:00:01,728 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~27.32 KB/27976, heapSize ~29.48 KB/30192, currentSize=3.15 KB/3228 for c53fc971d0411aedd63a16773066d9fe in 44ms, sequenceid=73, compaction requested=false 2023-05-29 10:00:01,728 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:01,728 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=82.1 K, sizeToCheck=16.0 K 2023-05-29 10:00:01,728 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:01,728 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c because midkey is the same as first or last row 2023-05-29 10:00:01,733 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/4cd0311d79e9477ca4b90c757c4a3392 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 2023-05-29 10:00:01,740 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c53fc971d0411aedd63a16773066d9fe/info of c53fc971d0411aedd63a16773066d9fe into 4cd0311d79e9477ca4b90c757c4a3392(size=40.7 K), total size for store is 72.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
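The minor compaction above kicks in as soon as the store holds three files, which matches the minFilesToCompact:3 / maxFilesToCompact:10 values printed in the CompactionConfiguration line when the store opened. Those numbers correspond to the stock compaction-selection properties; a small sketch that simply reads them back from a default configuration (the property names are the standard ones, the class itself is only illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThresholds {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "hbase.hstore.compaction.min" (older alias: "hbase.hstore.compactionThreshold")
            // is the minimum number of store files before a minor compaction is considered.
            System.out.println(conf.getInt("hbase.hstore.compaction.min", 3));
            // Upper bound on how many files a single minor compaction may select.
            System.out.println(conf.getInt("hbase.hstore.compaction.max", 10));
        }
    }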
2023-05-29 10:00:01,740 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:01,740 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe., storeName=c53fc971d0411aedd63a16773066d9fe/info, priority=13, startTime=1685354401677; duration=0sec 2023-05-29 10:00:01,740 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.8 K, sizeToCheck=16.0 K 2023-05-29 10:00:01,740 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:01,740 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 because midkey is the same as first or last row 2023-05-29 10:00:01,741 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:13,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:13,753 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:13,766 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=84 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/ace3c58207d342d698cf1b51422f0bf3 2023-05-29 10:00:13,773 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/ace3c58207d342d698cf1b51422f0bf3 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3 2023-05-29 10:00:13,779 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3, entries=7, sequenceid=84, filesize=12.1 K 2023-05-29 10:00:13,780 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for c53fc971d0411aedd63a16773066d9fe in 28ms, sequenceid=84, compaction requested=true 2023-05-29 10:00:13,780 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:13,780 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=84.9 K, sizeToCheck=16.0 K 2023-05-29 10:00:13,780 DEBUG [MemStoreFlusher.0] 
regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:13,780 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 because midkey is the same as first or last row 2023-05-29 10:00:13,780 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:13,780 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:13,781 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-29 10:00:13,782 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 86918 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:13,782 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): c53fc971d0411aedd63a16773066d9fe/info is initiating minor compaction (all files) 2023-05-29 10:00:13,782 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of c53fc971d0411aedd63a16773066d9fe/info in TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 
2023-05-29 10:00:13,782 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp, totalSize=84.9 K 2023-05-29 10:00:13,783 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 4cd0311d79e9477ca4b90c757c4a3392, keycount=34, bloomtype=ROW, size=40.7 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1685354399608 2023-05-29 10:00:13,783 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting d8b1dc62b3cd4a81832490de9efda71c, keycount=26, bloomtype=ROW, size=32.1 K, encoding=NONE, compression=NONE, seqNum=73, earliestPutTs=1685354401654 2023-05-29 10:00:13,785 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting ace3c58207d342d698cf1b51422f0bf3, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=84, earliestPutTs=1685354401685 2023-05-29 10:00:13,801 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): c53fc971d0411aedd63a16773066d9fe#info#compaction#32 average throughput is 34.38 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:13,811 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=109 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/3ac9fd88d0a04312bc99b228e5628ffa 2023-05-29 10:00:13,817 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/3ac9fd88d0a04312bc99b228e5628ffa as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/3ac9fd88d0a04312bc99b228e5628ffa 2023-05-29 10:00:13,820 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/a5e77c6b6f2e496faf67c3216c770a49 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49 2023-05-29 10:00:13,823 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/3ac9fd88d0a04312bc99b228e5628ffa, entries=22, sequenceid=109, filesize=27.9 K 2023-05-29 10:00:13,823 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=7.36 KB/7532 for c53fc971d0411aedd63a16773066d9fe in 43ms, sequenceid=109, compaction requested=false 2023-05-29 10:00:13,824 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:13,824 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=112.8 K, sizeToCheck=16.0 K 2023-05-29 10:00:13,824 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:13,824 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 because midkey is the same as first or last row 2023-05-29 10:00:13,827 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in c53fc971d0411aedd63a16773066d9fe/info of c53fc971d0411aedd63a16773066d9fe into a5e77c6b6f2e496faf67c3216c770a49(size=75.6 K), total size for store is 103.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 10:00:13,827 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:13,827 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe., storeName=c53fc971d0411aedd63a16773066d9fe/info, priority=13, startTime=1685354413780; duration=0sec 2023-05-29 10:00:13,827 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=103.5 K, sizeToCheck=16.0 K 2023-05-29 10:00:13,827 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-29 10:00:13,828 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:13,828 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:13,829 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37125] assignment.AssignmentManager(1140): Split request from jenkins-hbase4.apache.org,37289,1685354388557, parent={ENCODED => c53fc971d0411aedd63a16773066d9fe, NAME => 'TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-29 10:00:13,839 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37125] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:13,848 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37125] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c53fc971d0411aedd63a16773066d9fe, daughterA=51a477866ab37ffc2086e938d3a1253c, daughterB=e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:13,849 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c53fc971d0411aedd63a16773066d9fe, daughterA=51a477866ab37ffc2086e938d3a1253c, daughterB=e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:13,849 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c53fc971d0411aedd63a16773066d9fe, daughterA=51a477866ab37ffc2086e938d3a1253c, daughterB=e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:13,849 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c53fc971d0411aedd63a16773066d9fe, daughterA=51a477866ab37ffc2086e938d3a1253c, daughterB=e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:13,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, UNASSIGN}] 2023-05-29 10:00:13,858 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, UNASSIGN 2023-05-29 10:00:13,859 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c53fc971d0411aedd63a16773066d9fe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:13,859 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354413859"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354413859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354413859"}]},"ts":"1685354413859"} 2023-05-29 10:00:13,861 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 10:00:14,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c53fc971d0411aedd63a16773066d9fe, disabling compactions & flushes 2023-05-29 10:00:14,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 10:00:14,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 10:00:14,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. after waiting 0 ms 2023-05-29 10:00:14,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 
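At this point the region server has asked the master to split the parent region at splitKey=row0062, and the master has started SplitTableRegionProcedure pid=12, first unassigning and closing the parent. For comparison, the same kind of split can also be requested explicitly from a client; a minimal sketch with the standard Admin API (table name and split point are taken from the log above, the connection setup is assumed):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ManualSplit {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection();
                 Admin admin = conn.getAdmin()) {
                // Ask the master to split the table at an explicit split point;
                // "row0062" mirrors the splitKey the region server chose above.
                admin.split(TableName.valueOf("TestLogRolling-testLogRolling"),
                            Bytes.toBytes("row0062"));
            }
        }
    }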
2023-05-29 10:00:14,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c53fc971d0411aedd63a16773066d9fe 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:14,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=120 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/7674952e18164583995673524f76a230 2023-05-29 10:00:14,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.tmp/info/7674952e18164583995673524f76a230 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/7674952e18164583995673524f76a230 2023-05-29 10:00:14,038 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/7674952e18164583995673524f76a230, entries=7, sequenceid=120, filesize=12.1 K 2023-05-29 10:00:14,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for c53fc971d0411aedd63a16773066d9fe in 20ms, sequenceid=120, compaction requested=true 2023-05-29 10:00:14,045 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3] to archive 2023-05-29 10:00:14,045 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 10:00:14,047 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/13c6f47252ee476991c1ee2e42eb2070 2023-05-29 10:00:14,048 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/2b42ab16af734cd1bbbaf396857d88a0 2023-05-29 10:00:14,049 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/4cd0311d79e9477ca4b90c757c4a3392 2023-05-29 10:00:14,050 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ff3422156f8d4a8d980eafd63204e11e 2023-05-29 10:00:14,051 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/d8b1dc62b3cd4a81832490de9efda71c 2023-05-29 10:00:14,052 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/ace3c58207d342d698cf1b51422f0bf3 2023-05-29 
10:00:14,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/recovered.edits/123.seqid, newMaxSeqId=123, maxSeqId=1 2023-05-29 10:00:14,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 2023-05-29 10:00:14,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c53fc971d0411aedd63a16773066d9fe: 2023-05-29 10:00:14,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,061 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c53fc971d0411aedd63a16773066d9fe, regionState=CLOSED 2023-05-29 10:00:14,061 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354414061"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354414061"}]},"ts":"1685354414061"} 2023-05-29 10:00:14,065 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-29 10:00:14,065 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure c53fc971d0411aedd63a16773066d9fe, server=jenkins-hbase4.apache.org,37289,1685354388557 in 202 msec 2023-05-29 10:00:14,067 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-29 10:00:14,067 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=c53fc971d0411aedd63a16773066d9fe, UNASSIGN in 208 msec 2023-05-29 10:00:14,079 INFO [PEWorker-3] assignment.SplitTableRegionProcedure(694): pid=12 splitting 3 storefiles, region=c53fc971d0411aedd63a16773066d9fe, threads=3 2023-05-29 10:00:14,080 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/3ac9fd88d0a04312bc99b228e5628ffa for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,080 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/7674952e18164583995673524f76a230 for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,080 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49 for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,089 DEBUG [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(700): Will create HFileLink file 
for hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/7674952e18164583995673524f76a230, top=true 2023-05-29 10:00:14,089 DEBUG [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(700): Will create HFileLink file for hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/3ac9fd88d0a04312bc99b228e5628ffa, top=true 2023-05-29 10:00:14,094 INFO [StoreFileSplitter-pool-1] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.splits/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230 for child: e54cbc0a1aee6e34d271cda4e0336f20, parent: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,094 DEBUG [StoreFileSplitter-pool-1] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/7674952e18164583995673524f76a230 for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,095 INFO [StoreFileSplitter-pool-0] regionserver.HRegionFileSystem(742): Created linkFile:hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/.splits/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa for child: e54cbc0a1aee6e34d271cda4e0336f20, parent: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,095 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/3ac9fd88d0a04312bc99b228e5628ffa for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,111 DEBUG [StoreFileSplitter-pool-2] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49 for region: c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:00:14,111 DEBUG [PEWorker-3] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region c53fc971d0411aedd63a16773066d9fe Daughter A: 1 storefiles, Daughter B: 3 storefiles. 
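
The SplitTableRegionProcedure records above split the parent's store files by reference: the daughters receive HFileLink/reference files under .splits rather than rewritten data. As an assumed illustration only (the test triggers the split through writes, not through this call), an equivalent split can be requested explicitly through the Admin API using the boundary the log reports for the daughters (row0062):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class SplitRegionExample {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Request a split of the region containing "row0062". The master runs a
            // SplitTableRegionProcedure: the parent is unassigned, its store files are
            // split by reference, and two daughter regions are created and assigned.
            admin.split(TableName.valueOf("TestLogRolling-testLogRolling"),
                        Bytes.toBytes("row0062"));
        }
    }
}
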
2023-05-29 10:00:14,137 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/recovered.edits/123.seqid, newMaxSeqId=123, maxSeqId=-1 2023-05-29 10:00:14,139 DEBUG [PEWorker-3] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/recovered.edits/123.seqid, newMaxSeqId=123, maxSeqId=-1 2023-05-29 10:00:14,142 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1685354414141"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1685354414141"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1685354414141"}]},"ts":"1685354414141"} 2023-05-29 10:00:14,142 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414141"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354414141"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354414141"}]},"ts":"1685354414141"} 2023-05-29 10:00:14,142 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414141"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354414141"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354414141"}]},"ts":"1685354414141"} 2023-05-29 10:00:14,182 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37289] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-29 10:00:14,182 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
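
The three Put records above are the hbase:meta updates for the split: the parent row gains splitA/splitB qualifiers while each daughter gets its own row with regioninfo, state and seqnumDuringOpen. If one wanted to look at those rows from a client, a plain scan of hbase:meta is enough; this is only an assumed sketch (row-range bounds and printing are illustrative, not part of the test):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaExample {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
            // Meta row keys look like "<table>,<startKey>,<timestamp>.<encodedName>.",
            // so a start/stop row pair limits the scan to the table under test.
            Scan scan = new Scan()
                .withStartRow(Bytes.toBytes("TestLogRolling-testLogRolling,"))
                .withStopRow(Bytes.toBytes("TestLogRolling-testLogRolling,zzz"));
            try (ResultScanner scanner = meta.getScanner(scan)) {
                for (Result row : scanner) {
                    // Parent row carries info:splitA / info:splitB after the split;
                    // daughter rows carry info:regioninfo, info:state, info:seqnumDuringOpen.
                    System.out.println(Bytes.toString(row.getRow()));
                }
            }
        }
    }
}
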
2023-05-29 10:00:14,182 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-29 10:00:14,191 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=51a477866ab37ffc2086e938d3a1253c, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e54cbc0a1aee6e34d271cda4e0336f20, ASSIGN}] 2023-05-29 10:00:14,192 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e54cbc0a1aee6e34d271cda4e0336f20, ASSIGN 2023-05-29 10:00:14,193 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=51a477866ab37ffc2086e938d3a1253c, ASSIGN 2023-05-29 10:00:14,193 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e54cbc0a1aee6e34d271cda4e0336f20, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,37289,1685354388557; forceNewPlan=false, retain=false 2023-05-29 10:00:14,194 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=51a477866ab37ffc2086e938d3a1253c, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase4.apache.org,37289,1685354388557; forceNewPlan=false, retain=false 2023-05-29 10:00:14,197 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/info/9e4e255811ff4f31953af3d9ecf2175f 2023-05-29 10:00:14,211 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/table/6c09b329ac5e4e62a3ca70c83583696f 2023-05-29 10:00:14,216 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/info/9e4e255811ff4f31953af3d9ecf2175f as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info/9e4e255811ff4f31953af3d9ecf2175f 2023-05-29 10:00:14,221 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info/9e4e255811ff4f31953af3d9ecf2175f, entries=29, sequenceid=17, filesize=8.6 K 2023-05-29 10:00:14,222 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/table/6c09b329ac5e4e62a3ca70c83583696f as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/table/6c09b329ac5e4e62a3ca70c83583696f 2023-05-29 10:00:14,226 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/table/6c09b329ac5e4e62a3ca70c83583696f, entries=4, sequenceid=17, filesize=4.8 K 2023-05-29 10:00:14,227 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4934, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 45ms, sequenceid=17, compaction requested=false 2023-05-29 10:00:14,228 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 10:00:14,345 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=e54cbc0a1aee6e34d271cda4e0336f20, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:14,345 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=51a477866ab37ffc2086e938d3a1253c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:14,345 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354414345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354414345"}]},"ts":"1685354414345"} 2023-05-29 10:00:14,345 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354414345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354414345"}]},"ts":"1685354414345"} 2023-05-29 10:00:14,347 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 10:00:14,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 51a477866ab37ffc2086e938d3a1253c, server=jenkins-hbase4.apache.org,37289,1685354388557}] 2023-05-29 10:00:14,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 
2023-05-29 10:00:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 51a477866ab37ffc2086e938d3a1253c, NAME => 'TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-29 10:00:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:00:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,504 INFO [StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,504 DEBUG [StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info 2023-05-29 10:00:14,504 DEBUG [StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info 2023-05-29 10:00:14,505 INFO [StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 51a477866ab37ffc2086e938d3a1253c columnFamilyName info 2023-05-29 10:00:14,516 DEBUG [StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] regionserver.HStore(539): loaded hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-bottom 2023-05-29 10:00:14,517 INFO 
[StoreOpener-51a477866ab37ffc2086e938d3a1253c-1] regionserver.HStore(310): Store=51a477866ab37ffc2086e938d3a1253c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:00:14,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:00:14,522 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 51a477866ab37ffc2086e938d3a1253c; next sequenceid=124; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=809386, jitterRate=0.029187843203544617}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 10:00:14,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 51a477866ab37ffc2086e938d3a1253c: 2023-05-29 10:00:14,523 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c., pid=18, masterSystemTime=1685354414499 2023-05-29 10:00:14,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:14,524 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-29 10:00:14,524 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:00:14,524 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): 51a477866ab37ffc2086e938d3a1253c/info is initiating minor compaction (all files) 2023-05-29 10:00:14,524 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 51a477866ab37ffc2086e938d3a1253c/info in TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 
2023-05-29 10:00:14,524 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-bottom] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/.tmp, totalSize=75.6 K 2023-05-29 10:00:14,525 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe, keycount=33, bloomtype=ROW, size=75.6 K, encoding=NONE, compression=NONE, seqNum=84, earliestPutTs=1685354399608 2023-05-29 10:00:14,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:00:14,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:00:14,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:14,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e54cbc0a1aee6e34d271cda4e0336f20, NAME => 'TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-29 10:00:14,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:00:14,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,526 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=51a477866ab37ffc2086e938d3a1253c, regionState=OPEN, openSeqNum=124, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:14,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,526 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414526"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354414526"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354414526"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354414526"}]},"ts":"1685354414526"} 2023-05-29 10:00:14,527 INFO [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,528 DEBUG [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info 2023-05-29 10:00:14,528 DEBUG [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info 2023-05-29 10:00:14,529 INFO [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e54cbc0a1aee6e34d271cda4e0336f20 columnFamilyName info 2023-05-29 10:00:14,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-05-29 10:00:14,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 51a477866ab37ffc2086e938d3a1253c, server=jenkins-hbase4.apache.org,37289,1685354388557 in 180 msec 2023-05-29 10:00:14,532 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): 51a477866ab37ffc2086e938d3a1253c#info#compaction#36 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:14,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=51a477866ab37ffc2086e938d3a1253c, ASSIGN in 339 msec 2023-05-29 10:00:14,543 DEBUG [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] regionserver.HStore(539): loaded hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa 2023-05-29 10:00:14,548 DEBUG [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] regionserver.HStore(539): loaded hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230 2023-05-29 10:00:14,549 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/.tmp/info/c001b34235e046dab5eefe05265933ac as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/c001b34235e046dab5eefe05265933ac 2023-05-29 10:00:14,554 DEBUG [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] regionserver.HStore(539): loaded hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-top 2023-05-29 10:00:14,554 INFO [StoreOpener-e54cbc0a1aee6e34d271cda4e0336f20-1] regionserver.HStore(310): Store=e54cbc0a1aee6e34d271cda4e0336f20/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:00:14,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,556 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 51a477866ab37ffc2086e938d3a1253c/info of 51a477866ab37ffc2086e938d3a1253c into c001b34235e046dab5eefe05265933ac(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
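
The compaction just completed rewrites the daughter's single parent-reference file (the "-bottom" half of the parent HFile) into a real HFile owned by 51a477866ab37ffc2086e938d3a1253c; it was queued automatically because the region was opened from a split. As an assumed illustration only, the same rewrite can be forced from a client with a major compaction request:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MajorCompactExample {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Asynchronously request a major compaction of every region of the table.
            // For a freshly split daughter this rewrites the parent-reference files
            // into ordinary HFiles belonging to the daughter region.
            admin.majorCompact(TableName.valueOf("TestLogRolling-testLogRolling"));
        }
    }
}
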
2023-05-29 10:00:14,556 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 51a477866ab37ffc2086e938d3a1253c: 2023-05-29 10:00:14,556 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c., storeName=51a477866ab37ffc2086e938d3a1253c/info, priority=15, startTime=1685354414523; duration=0sec 2023-05-29 10:00:14,556 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:14,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:14,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e54cbc0a1aee6e34d271cda4e0336f20; next sequenceid=124; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=740817, jitterRate=-0.05800269544124603}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 10:00:14,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:14,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., pid=17, masterSystemTime=1685354414499 2023-05-29 10:00:14,560 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:14,562 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:14,564 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:14,564 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:14,564 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:14,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 
2023-05-29 10:00:14,564 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-top, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=115.6 K 2023-05-29 10:00:14,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:14,564 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe, keycount=33, bloomtype=ROW, size=75.6 K, encoding=NONE, compression=NONE, seqNum=85, earliestPutTs=1685354399608 2023-05-29 10:00:14,565 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=e54cbc0a1aee6e34d271cda4e0336f20, regionState=OPEN, openSeqNum=124, regionLocation=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:14,565 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=109, earliestPutTs=1685354413753 2023-05-29 10:00:14,565 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1685354414565"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354414565"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354414565"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354414565"}]},"ts":"1685354414565"} 2023-05-29 10:00:14,565 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685354413781 2023-05-29 10:00:14,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-05-29 10:00:14,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure e54cbc0a1aee6e34d271cda4e0336f20, 
server=jenkins-hbase4.apache.org,37289,1685354388557 in 220 msec 2023-05-29 10:00:14,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-05-29 10:00:14,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=e54cbc0a1aee6e34d271cda4e0336f20, ASSIGN in 378 msec 2023-05-29 10:00:14,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=c53fc971d0411aedd63a16773066d9fe, daughterA=51a477866ab37ffc2086e938d3a1253c, daughterB=e54cbc0a1aee6e34d271cda4e0336f20 in 732 msec 2023-05-29 10:00:14,576 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#37 average throughput is 35.92 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:14,589 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/e4e69a6066a74a0aaf1a01dbbe6d125d as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e4e69a6066a74a0aaf1a01dbbe6d125d 2023-05-29 10:00:14,595 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into e4e69a6066a74a0aaf1a01dbbe6d125d(size=41.9 K), total size for store is 41.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 10:00:14,595 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:14,595 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354414560; duration=0sec 2023-05-29 10:00:14,595 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:15,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] ipc.CallRunner(144): callId: 107 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:40284 deadline: 1685354425789, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1685354389595.c53fc971d0411aedd63a16773066d9fe. 
is not online on jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:19,662 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 10:00:25,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:25,858 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:25,872 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=134 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/34df9682ee214c56917e8c3fde8637df 2023-05-29 10:00:25,878 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/34df9682ee214c56917e8c3fde8637df as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df 2023-05-29 10:00:25,884 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df, entries=7, sequenceid=134, filesize=12.1 K 2023-05-29 10:00:25,884 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for e54cbc0a1aee6e34d271cda4e0336f20 in 26ms, sequenceid=134, compaction requested=false 2023-05-29 10:00:25,885 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:25,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:25,885 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-29 10:00:25,898 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=159 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/bf014f44f8a44fffaed6d30e1a499c1b 2023-05-29 10:00:25,904 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/bf014f44f8a44fffaed6d30e1a499c1b as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b 2023-05-29 10:00:25,910 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b, entries=22, sequenceid=159, filesize=27.9 K 2023-05-29 10:00:25,911 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=3.15 KB/3228 for e54cbc0a1aee6e34d271cda4e0336f20 in 26ms, sequenceid=159, compaction requested=true 2023-05-29 10:00:25,911 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:25,911 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:25,911 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:25,912 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83875 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:25,913 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:25,913 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:25,913 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e4e69a6066a74a0aaf1a01dbbe6d125d, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=81.9 K 2023-05-29 10:00:25,913 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting e4e69a6066a74a0aaf1a01dbbe6d125d, keycount=35, bloomtype=ROW, size=41.9 K, encoding=NONE, compression=NONE, seqNum=120, earliestPutTs=1685354401688 2023-05-29 10:00:25,913 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 34df9682ee214c56917e8c3fde8637df, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=134, earliestPutTs=1685354425851 2023-05-29 10:00:25,914 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting bf014f44f8a44fffaed6d30e1a499c1b, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1685354425858 2023-05-29 10:00:25,924 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] 
throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#40 average throughput is 65.67 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:25,938 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/83bfba303fb1463b834547bda8942f00 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/83bfba303fb1463b834547bda8942f00 2023-05-29 10:00:25,944 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into 83bfba303fb1463b834547bda8942f00(size=72.6 K), total size for store is 72.6 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 10:00:25,944 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:25,944 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354425911; duration=0sec 2023-05-29 10:00:25,944 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:27,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:27,893 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:27,906 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=170 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/77e833416f984d8094b8c75ddb5aabe0 2023-05-29 10:00:27,912 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/77e833416f984d8094b8c75ddb5aabe0 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0 2023-05-29 10:00:27,918 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0, entries=7, sequenceid=170, filesize=12.1 K 2023-05-29 10:00:27,919 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 
KB/23672 for e54cbc0a1aee6e34d271cda4e0336f20 in 26ms, sequenceid=170, compaction requested=false 2023-05-29 10:00:27,919 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:27,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:27,919 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-29 10:00:27,935 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=196 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/eb49c8ecbad1462a8329cd3a56e3e7a6 2023-05-29 10:00:27,941 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/eb49c8ecbad1462a8329cd3a56e3e7a6 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6 2023-05-29 10:00:27,946 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6, entries=23, sequenceid=196, filesize=29.0 K 2023-05-29 10:00:27,947 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=5.25 KB/5380 for e54cbc0a1aee6e34d271cda4e0336f20 in 28ms, sequenceid=196, compaction requested=true 2023-05-29 10:00:27,947 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:27,947 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:27,947 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:27,948 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 116458 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:27,949 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:27,949 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 
2023-05-29 10:00:27,949 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/83bfba303fb1463b834547bda8942f00, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=113.7 K 2023-05-29 10:00:27,949 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 83bfba303fb1463b834547bda8942f00, keycount=64, bloomtype=ROW, size=72.6 K, encoding=NONE, compression=NONE, seqNum=159, earliestPutTs=1685354401688 2023-05-29 10:00:27,950 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 77e833416f984d8094b8c75ddb5aabe0, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=170, earliestPutTs=1685354425885 2023-05-29 10:00:27,950 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting eb49c8ecbad1462a8329cd3a56e3e7a6, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=196, earliestPutTs=1685354427894 2023-05-29 10:00:27,962 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#43 average throughput is 48.23 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:27,979 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/da54e948ea244e7c9c2f651894703092 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/da54e948ea244e7c9c2f651894703092 2023-05-29 10:00:27,985 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into da54e948ea244e7c9c2f651894703092(size=104.3 K), total size for store is 104.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 10:00:27,985 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:27,985 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354427947; duration=0sec 2023-05-29 10:00:27,985 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:29,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:29,928 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:29,956 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 10:00:29,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] ipc.CallRunner(144): callId: 199 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:40284 deadline: 1685354439955, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:30,344 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=207 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/f144e22bd1a44b95a7caf4ad5c314f04 2023-05-29 10:00:30,350 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/f144e22bd1a44b95a7caf4ad5c314f04 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04 2023-05-29 10:00:30,355 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04, entries=7, sequenceid=207, filesize=12.1 K 2023-05-29 10:00:30,356 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=23.12 KB/23672 for e54cbc0a1aee6e34d271cda4e0336f20 in 428ms, sequenceid=207, compaction requested=false 2023-05-29 10:00:30,356 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:34,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=35, reuseRatio=72.92% 2023-05-29 10:00:34,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-29 10:00:40,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:40,037 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=24.17 KB heapSize=26.13 KB 2023-05-29 10:00:40,046 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 10:00:40,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] ipc.CallRunner(144): callId: 208 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:40284 deadline: 1685354450045, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:40,048 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.17 KB at sequenceid=233 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/098ba6983bac46c29120f6872479b141 2023-05-29 10:00:40,054 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/098ba6983bac46c29120f6872479b141 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141 2023-05-29 10:00:40,058 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141, entries=23, sequenceid=233, filesize=29.0 K 2023-05-29 10:00:40,059 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~24.17 KB/24748, heapSize ~26.11 KB/26736, currentSize=6.30 KB/6456 for e54cbc0a1aee6e34d271cda4e0336f20 in 22ms, sequenceid=233, compaction requested=true 2023-05-29 10:00:40,059 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:40,059 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:40,059 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:40,060 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 148908 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:40,060 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:40,060 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 
2023-05-29 10:00:40,060 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/da54e948ea244e7c9c2f651894703092, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=145.4 K 2023-05-29 10:00:40,061 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting da54e948ea244e7c9c2f651894703092, keycount=94, bloomtype=ROW, size=104.3 K, encoding=NONE, compression=NONE, seqNum=196, earliestPutTs=1685354401688 2023-05-29 10:00:40,061 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting f144e22bd1a44b95a7caf4ad5c314f04, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=207, earliestPutTs=1685354427920 2023-05-29 10:00:40,061 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 098ba6983bac46c29120f6872479b141, keycount=23, bloomtype=ROW, size=29.0 K, encoding=NONE, compression=NONE, seqNum=233, earliestPutTs=1685354429929 2023-05-29 10:00:40,069 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#46 average throughput is 127.24 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:40,080 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/d7c0b92ea8fd4d8499974b86436ca575 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d7c0b92ea8fd4d8499974b86436ca575 2023-05-29 10:00:40,085 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into d7c0b92ea8fd4d8499974b86436ca575(size=136.2 K), total size for store is 136.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 10:00:40,085 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:40,085 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354440059; duration=0sec 2023-05-29 10:00:40,086 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:41,770 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-29 10:00:50,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:50,101 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:50,112 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=244 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/3d1076a0ee074133b264cfa44313716c 2023-05-29 10:00:50,118 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/3d1076a0ee074133b264cfa44313716c as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c 2023-05-29 10:00:50,123 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c, entries=7, sequenceid=244, filesize=12.1 K 2023-05-29 10:00:50,124 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for e54cbc0a1aee6e34d271cda4e0336f20 in 23ms, sequenceid=244, compaction requested=false 2023-05-29 10:00:50,124 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:52,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:52,109 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:52,120 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=254 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/2f33f6764ce14b309941b6fade7f237b 2023-05-29 10:00:52,126 DEBUG 
[MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/2f33f6764ce14b309941b6fade7f237b as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b 2023-05-29 10:00:52,131 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b, entries=7, sequenceid=254, filesize=12.1 K 2023-05-29 10:00:52,132 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for e54cbc0a1aee6e34d271cda4e0336f20 in 23ms, sequenceid=254, compaction requested=true 2023-05-29 10:00:52,132 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:52,132 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:52,132 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:52,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:52,133 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-29 10:00:52,133 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 164287 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:52,134 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:52,134 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 
2023-05-29 10:00:52,134 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d7c0b92ea8fd4d8499974b86436ca575, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=160.4 K 2023-05-29 10:00:52,134 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting d7c0b92ea8fd4d8499974b86436ca575, keycount=124, bloomtype=ROW, size=136.2 K, encoding=NONE, compression=NONE, seqNum=233, earliestPutTs=1685354401688 2023-05-29 10:00:52,139 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 3d1076a0ee074133b264cfa44313716c, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=244, earliestPutTs=1685354440038 2023-05-29 10:00:52,139 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 2f33f6764ce14b309941b6fade7f237b, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=254, earliestPutTs=1685354452102 2023-05-29 10:00:52,148 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=277 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/e69743bad8df49f89a8159655d96cdd0 2023-05-29 10:00:52,158 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#50 average throughput is 47.20 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:52,158 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/e69743bad8df49f89a8159655d96cdd0 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0 2023-05-29 10:00:52,166 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0, entries=20, sequenceid=277, filesize=25.8 K 2023-05-29 10:00:52,167 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for e54cbc0a1aee6e34d271cda4e0336f20 in 35ms, sequenceid=277, compaction requested=false 2023-05-29 10:00:52,167 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:52,174 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/d1e81c22a13e4d6e83a64b5f25f4657a as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d1e81c22a13e4d6e83a64b5f25f4657a 2023-05-29 10:00:52,179 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into d1e81c22a13e4d6e83a64b5f25f4657a(size=151.0 K), total size for store is 176.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-29 10:00:52,179 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:52,179 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354452132; duration=0sec 2023-05-29 10:00:52,179 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:54,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:54,140 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-29 10:00:54,150 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=288 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/f69d9a0774f84640923ed282e386db85 2023-05-29 10:00:54,156 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/f69d9a0774f84640923ed282e386db85 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85 2023-05-29 10:00:54,162 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85, entries=7, sequenceid=288, filesize=12.1 K 2023-05-29 10:00:54,163 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for e54cbc0a1aee6e34d271cda4e0336f20 in 23ms, sequenceid=288, compaction requested=true 2023-05-29 10:00:54,163 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:54,163 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:00:54,164 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:00:54,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:00:54,165 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-29 10:00:54,165 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has 
selected 3 files of size 193546 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:00:54,165 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:00:54,165 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:00:54,165 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d1e81c22a13e4d6e83a64b5f25f4657a, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=189.0 K 2023-05-29 10:00:54,166 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting d1e81c22a13e4d6e83a64b5f25f4657a, keycount=138, bloomtype=ROW, size=151.0 K, encoding=NONE, compression=NONE, seqNum=254, earliestPutTs=1685354401688 2023-05-29 10:00:54,167 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting e69743bad8df49f89a8159655d96cdd0, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=277, earliestPutTs=1685354452110 2023-05-29 10:00:54,167 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting f69d9a0774f84640923ed282e386db85, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1685354452133 2023-05-29 10:00:54,178 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-29 10:00:54,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] ipc.CallRunner(144): callId: 274 service: ClientService methodName: Mutate size: 1.2 K connection: 172.31.14.131:40284 deadline: 1685354464178, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=e54cbc0a1aee6e34d271cda4e0336f20, server=jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:00:54,182 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#53 average throughput is 56.44 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:00:54,182 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=311 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/5c995fd42e714d66bbf14177d99e1a0c 2023-05-29 10:00:54,196 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/5c995fd42e714d66bbf14177d99e1a0c as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c 2023-05-29 10:00:54,201 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/bc61864e76de4fbda9fb54f7013c2b45 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bc61864e76de4fbda9fb54f7013c2b45 2023-05-29 10:00:54,201 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c, entries=20, sequenceid=311, filesize=25.8 K 2023-05-29 10:00:54,202 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=9.46 KB/9684 for 
e54cbc0a1aee6e34d271cda4e0336f20 in 38ms, sequenceid=311, compaction requested=false 2023-05-29 10:00:54,202 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:54,207 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into bc61864e76de4fbda9fb54f7013c2b45(size=179.6 K), total size for store is 205.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 10:00:54,207 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:00:54,207 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354454163; duration=0sec 2023-05-29 10:00:54,207 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:01:04,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37289] regionserver.HRegion(9158): Flush requested on e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:01:04,206 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing e54cbc0a1aee6e34d271cda4e0336f20 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-29 10:01:04,218 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=325 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/5664571b1e73428797775c45f232d21d 2023-05-29 10:01:04,224 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/5664571b1e73428797775c45f232d21d as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d 2023-05-29 10:01:04,229 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d, entries=10, sequenceid=325, filesize=15.3 K 2023-05-29 10:01:04,230 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=0 B/0 for e54cbc0a1aee6e34d271cda4e0336f20 in 24ms, sequenceid=325, compaction requested=true 2023-05-29 10:01:04,230 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:01:04,230 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:01:04,230 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] 
compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-29 10:01:04,231 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 226022 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-29 10:01:04,231 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1912): e54cbc0a1aee6e34d271cda4e0336f20/info is initiating minor compaction (all files) 2023-05-29 10:01:04,231 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of e54cbc0a1aee6e34d271cda4e0336f20/info in TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:01:04,231 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bc61864e76de4fbda9fb54f7013c2b45, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d] into tmpdir=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp, totalSize=220.7 K 2023-05-29 10:01:04,232 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting bc61864e76de4fbda9fb54f7013c2b45, keycount=165, bloomtype=ROW, size=179.6 K, encoding=NONE, compression=NONE, seqNum=288, earliestPutTs=1685354401688 2023-05-29 10:01:04,232 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 5c995fd42e714d66bbf14177d99e1a0c, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=311, earliestPutTs=1685354454141 2023-05-29 10:01:04,232 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] compactions.Compactor(207): Compacting 5664571b1e73428797775c45f232d21d, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=325, earliestPutTs=1685354454165 2023-05-29 10:01:04,243 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] throttle.PressureAwareThroughputController(145): e54cbc0a1aee6e34d271cda4e0336f20#info#compaction#55 average throughput is 66.70 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-29 10:01:04,254 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/.tmp/info/a840c4538c6f443794cf5ccdf28da3b1 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a840c4538c6f443794cf5ccdf28da3b1 2023-05-29 10:01:04,258 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in e54cbc0a1aee6e34d271cda4e0336f20/info of e54cbc0a1aee6e34d271cda4e0336f20 into a840c4538c6f443794cf5ccdf28da3b1(size=211.4 K), total size for store is 211.4 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-29 10:01:04,258 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:01:04,258 INFO [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., storeName=e54cbc0a1aee6e34d271cda4e0336f20/info, priority=13, startTime=1685354464230; duration=0sec 2023-05-29 10:01:04,258 DEBUG [RS:0;jenkins-hbase4:37289-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-29 10:01:06,207 INFO [Listener at localhost/32845] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-29 10:01:06,223 INFO [Listener at localhost/32845] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354388933 with entries=311, filesize=307.65 KB; new WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466207 2023-05-29 10:01:06,223 DEBUG [Listener at localhost/32845] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35609,DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514,DISK], DatanodeInfoWithStorage[127.0.0.1:41541,DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5,DISK]] 2023-05-29 10:01:06,223 DEBUG [Listener at localhost/32845] wal.AbstractFSWAL(716): hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354388933 is not closed yet, will try archiving it next time 2023-05-29 10:01:06,228 INFO [Listener at localhost/32845] regionserver.HRegion(2745): Flushing 058fa94683023bd6d76721a8be9e197a 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 10:01:06,239 INFO [Listener at localhost/32845] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/.tmp/info/8a58e733611a445f834fea2d5cbd0f0c 2023-05-29 10:01:06,243 DEBUG [Listener at localhost/32845] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/.tmp/info/8a58e733611a445f834fea2d5cbd0f0c as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/info/8a58e733611a445f834fea2d5cbd0f0c 2023-05-29 10:01:06,248 INFO [Listener at localhost/32845] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/info/8a58e733611a445f834fea2d5cbd0f0c, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 10:01:06,249 INFO [Listener at localhost/32845] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 058fa94683023bd6d76721a8be9e197a in 21ms, sequenceid=6, compaction requested=false 2023-05-29 10:01:06,249 DEBUG [Listener at localhost/32845] regionserver.HRegion(2446): Flush status journal for 058fa94683023bd6d76721a8be9e197a: 2023-05-29 10:01:06,249 DEBUG [Listener at localhost/32845] regionserver.HRegion(2446): Flush status journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:01:06,250 DEBUG [Listener at localhost/32845] regionserver.HRegion(2446): Flush status journal for 51a477866ab37ffc2086e938d3a1253c: 2023-05-29 10:01:06,250 INFO [Listener at localhost/32845] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-29 10:01:06,258 INFO [Listener at localhost/32845] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/info/5b373ec755e149fab081384966b36e29 2023-05-29 10:01:06,262 DEBUG [Listener at localhost/32845] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/.tmp/info/5b373ec755e149fab081384966b36e29 as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info/5b373ec755e149fab081384966b36e29 2023-05-29 10:01:06,266 INFO [Listener at localhost/32845] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/info/5b373ec755e149fab081384966b36e29, entries=16, sequenceid=24, filesize=7.0 K 2023-05-29 10:01:06,267 INFO [Listener at localhost/32845] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2312, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 17ms, sequenceid=24, compaction requested=false 2023-05-29 10:01:06,267 DEBUG [Listener at localhost/32845] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-29 10:01:06,274 INFO [Listener at localhost/32845] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466207 with entries=2, filesize=607 B; new WAL /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466267 2023-05-29 10:01:06,274 DEBUG [Listener at localhost/32845] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35609,DS-09afe4b6-cb40-46b4-ad4c-cf52a64fb514,DISK], DatanodeInfoWithStorage[127.0.0.1:41541,DS-ce48ded6-409c-4fb8-b6fb-40516c91d0d5,DISK]] 2023-05-29 10:01:06,274 DEBUG [Listener at localhost/32845] wal.AbstractFSWAL(716): hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466207 is not closed yet, will try archiving it next time 2023-05-29 10:01:06,275 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354388933 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354388933 2023-05-29 10:01:06,275 INFO [Listener at localhost/32845] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-29 10:01:06,277 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466207 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs/jenkins-hbase4.apache.org%2C37289%2C1685354388557.1685354466207 2023-05-29 10:01:06,376 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 10:01:06,376 INFO [Listener at localhost/32845] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-29 10:01:06,376 DEBUG [Listener at localhost/32845] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63710009 to 127.0.0.1:59759 2023-05-29 10:01:06,376 DEBUG [Listener at localhost/32845] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:06,376 DEBUG [Listener at localhost/32845] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 10:01:06,376 DEBUG [Listener at localhost/32845] util.JVMClusterUtil(257): Found active master hash=454731872, stopped=false 2023-05-29 10:01:06,376 INFO [Listener at localhost/32845] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 10:01:06,378 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:06,378 INFO [Listener at localhost/32845] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 10:01:06,378 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:06,378 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:06,379 DEBUG [Listener at localhost/32845] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x052e3ec9 to 127.0.0.1:59759 2023-05-29 10:01:06,379 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:06,379 DEBUG [Listener at localhost/32845] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:06,380 INFO [Listener at localhost/32845] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37289,1685354388557' ***** 2023-05-29 10:01:06,380 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:06,380 INFO [Listener at localhost/32845] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 10:01:06,380 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(3303): Received CLOSE for 058fa94683023bd6d76721a8be9e197a 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(3303): Received CLOSE for e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(3303): Received CLOSE for 51a477866ab37ffc2086e938d3a1253c 2023-05-29 10:01:06,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 058fa94683023bd6d76721a8be9e197a, disabling compactions & flushes 2023-05-29 10:01:06,380 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:01:06,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 10:01:06,381 DEBUG [RS:0;jenkins-hbase4:37289] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5ff9ef56 to 127.0.0.1:59759 2023-05-29 10:01:06,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 10:01:06,381 DEBUG [RS:0;jenkins-hbase4:37289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:06,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. after waiting 0 ms 2023-05-29 10:01:06,381 INFO [RS:0;jenkins-hbase4:37289] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 10:01:06,381 INFO [RS:0;jenkins-hbase4:37289] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 10:01:06,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 10:01:06,381 INFO [RS:0;jenkins-hbase4:37289] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-05-29 10:01:06,381 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 10:01:06,381 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-29 10:01:06,381 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1478): Online Regions={058fa94683023bd6d76721a8be9e197a=hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a., e54cbc0a1aee6e34d271cda4e0336f20=TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20., 51a477866ab37ffc2086e938d3a1253c=TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c., 1588230740=hbase:meta,,1.1588230740} 2023-05-29 10:01:06,381 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 10:01:06,381 DEBUG [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1504): Waiting on 058fa94683023bd6d76721a8be9e197a, 1588230740, 51a477866ab37ffc2086e938d3a1253c, e54cbc0a1aee6e34d271cda4e0336f20 2023-05-29 10:01:06,382 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 10:01:06,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 10:01:06,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 10:01:06,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 10:01:06,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/namespace/058fa94683023bd6d76721a8be9e197a/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 10:01:06,390 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-29 10:01:06,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 10:01:06,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 058fa94683023bd6d76721a8be9e197a: 2023-05-29 10:01:06,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685354389111.058fa94683023bd6d76721a8be9e197a. 2023-05-29 10:01:06,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e54cbc0a1aee6e34d271cda4e0336f20, disabling compactions & flushes 2023-05-29 10:01:06,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:01:06,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 
2023-05-29 10:01:06,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. after waiting 0 ms 2023-05-29 10:01:06,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:01:06,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 10:01:06,396 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 10:01:06,399 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 10:01:06,400 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 10:01:06,403 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-top, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e4e69a6066a74a0aaf1a01dbbe6d125d, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/83bfba303fb1463b834547bda8942f00, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/da54e948ea244e7c9c2f651894703092, 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d7c0b92ea8fd4d8499974b86436ca575, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d1e81c22a13e4d6e83a64b5f25f4657a, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bc61864e76de4fbda9fb54f7013c2b45, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d] to archive 2023-05-29 10:01:06,404 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-29 10:01:06,406 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:01:06,407 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-3ac9fd88d0a04312bc99b228e5628ffa 2023-05-29 10:01:06,408 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e4e69a6066a74a0aaf1a01dbbe6d125d to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e4e69a6066a74a0aaf1a01dbbe6d125d 2023-05-29 10:01:06,409 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/TestLogRolling-testLogRolling=c53fc971d0411aedd63a16773066d9fe-7674952e18164583995673524f76a230 2023-05-29 10:01:06,411 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/34df9682ee214c56917e8c3fde8637df 2023-05-29 10:01:06,412 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/83bfba303fb1463b834547bda8942f00 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/83bfba303fb1463b834547bda8942f00 2023-05-29 10:01:06,413 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bf014f44f8a44fffaed6d30e1a499c1b 2023-05-29 10:01:06,414 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/77e833416f984d8094b8c75ddb5aabe0 2023-05-29 10:01:06,415 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/da54e948ea244e7c9c2f651894703092 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/da54e948ea244e7c9c2f651894703092 2023-05-29 10:01:06,416 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/eb49c8ecbad1462a8329cd3a56e3e7a6 2023-05-29 10:01:06,417 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f144e22bd1a44b95a7caf4ad5c314f04 2023-05-29 10:01:06,418 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): 
Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d7c0b92ea8fd4d8499974b86436ca575 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d7c0b92ea8fd4d8499974b86436ca575 2023-05-29 10:01:06,420 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/098ba6983bac46c29120f6872479b141 2023-05-29 10:01:06,421 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/3d1076a0ee074133b264cfa44313716c 2023-05-29 10:01:06,422 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d1e81c22a13e4d6e83a64b5f25f4657a to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/d1e81c22a13e4d6e83a64b5f25f4657a 2023-05-29 10:01:06,423 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/2f33f6764ce14b309941b6fade7f237b 2023-05-29 10:01:06,424 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/e69743bad8df49f89a8159655d96cdd0 2023-05-29 10:01:06,425 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] 
backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bc61864e76de4fbda9fb54f7013c2b45 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/bc61864e76de4fbda9fb54f7013c2b45 2023-05-29 10:01:06,426 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85 to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/f69d9a0774f84640923ed282e386db85 2023-05-29 10:01:06,427 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5c995fd42e714d66bbf14177d99e1a0c 2023-05-29 10:01:06,428 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/info/5664571b1e73428797775c45f232d21d 2023-05-29 10:01:06,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/e54cbc0a1aee6e34d271cda4e0336f20/recovered.edits/330.seqid, newMaxSeqId=330, maxSeqId=123 2023-05-29 10:01:06,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:01:06,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e54cbc0a1aee6e34d271cda4e0336f20: 2023-05-29 10:01:06,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1685354413839.e54cbc0a1aee6e34d271cda4e0336f20. 2023-05-29 10:01:06,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 51a477866ab37ffc2086e938d3a1253c, disabling compactions & flushes 2023-05-29 10:01:06,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 
2023-05-29 10:01:06,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:01:06,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. after waiting 0 ms 2023-05-29 10:01:06,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:01:06,434 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe->hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/c53fc971d0411aedd63a16773066d9fe/info/a5e77c6b6f2e496faf67c3216c770a49-bottom] to archive 2023-05-29 10:01:06,435 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-29 10:01:06,436 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe to hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/archive/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/info/a5e77c6b6f2e496faf67c3216c770a49.c53fc971d0411aedd63a16773066d9fe 2023-05-29 10:01:06,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/data/default/TestLogRolling-testLogRolling/51a477866ab37ffc2086e938d3a1253c/recovered.edits/128.seqid, newMaxSeqId=128, maxSeqId=123 2023-05-29 10:01:06,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:01:06,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 51a477866ab37ffc2086e938d3a1253c: 2023-05-29 10:01:06,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1685354413839.51a477866ab37ffc2086e938d3a1253c. 2023-05-29 10:01:06,582 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37289,1685354388557; all regions closed. 
2023-05-29 10:01:06,583 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:01:06,589 DEBUG [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs 2023-05-29 10:01:06,589 INFO [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37289%2C1685354388557.meta:.meta(num 1685354389052) 2023-05-29 10:01:06,590 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/WALs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:01:06,597 DEBUG [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/oldWALs 2023-05-29 10:01:06,597 INFO [RS:0;jenkins-hbase4:37289] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C37289%2C1685354388557:(num 1685354466267) 2023-05-29 10:01:06,597 DEBUG [RS:0;jenkins-hbase4:37289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:06,597 INFO [RS:0;jenkins-hbase4:37289] regionserver.LeaseManager(133): Closed leases 2023-05-29 10:01:06,597 INFO [RS:0;jenkins-hbase4:37289] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-29 10:01:06,597 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-29 10:01:06,598 INFO [RS:0;jenkins-hbase4:37289] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37289 2023-05-29 10:01:06,603 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37289,1685354388557 2023-05-29 10:01:06,603 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 10:01:06,603 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 10:01:06,604 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37289,1685354388557] 2023-05-29 10:01:06,604 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37289,1685354388557; numProcessing=1 2023-05-29 10:01:06,605 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37289,1685354388557 already deleted, retry=false 2023-05-29 10:01:06,605 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37289,1685354388557 expired; onlineServers=0 2023-05-29 10:01:06,605 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,37125,1685354388519' ***** 
2023-05-29 10:01:06,605 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 10:01:06,606 DEBUG [M:0;jenkins-hbase4:37125] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f252fdb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 10:01:06,606 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 10:01:06,606 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37125,1685354388519; all regions closed. 2023-05-29 10:01:06,606 DEBUG [M:0;jenkins-hbase4:37125] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:06,606 DEBUG [M:0;jenkins-hbase4:37125] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 10:01:06,606 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-29 10:01:06,606 DEBUG [M:0;jenkins-hbase4:37125] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-29 10:01:06,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354388700] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354388700,5,FailOnTimeoutGroup] 2023-05-29 10:01:06,607 INFO [M:0;jenkins-hbase4:37125] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-29 10:01:06,607 INFO [M:0;jenkins-hbase4:37125] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-29 10:01:06,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354388700] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354388700,5,FailOnTimeoutGroup] 2023-05-29 10:01:06,607 INFO [M:0;jenkins-hbase4:37125] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-05-29 10:01:06,608 DEBUG [M:0;jenkins-hbase4:37125] master.HMaster(1512): Stopping service threads 2023-05-29 10:01:06,608 INFO [M:0;jenkins-hbase4:37125] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-29 10:01:06,608 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-29 10:01:06,608 ERROR [M:0;jenkins-hbase4:37125] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-29 10:01:06,608 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:06,608 INFO [M:0;jenkins-hbase4:37125] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-29 10:01:06,608 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-29 10:01:06,608 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 10:01:06,609 DEBUG [M:0;jenkins-hbase4:37125] zookeeper.ZKUtil(398): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-29 10:01:06,609 WARN [M:0;jenkins-hbase4:37125] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-29 10:01:06,609 INFO [M:0;jenkins-hbase4:37125] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-29 10:01:06,609 INFO [M:0;jenkins-hbase4:37125] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-29 10:01:06,609 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 10:01:06,609 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:06,609 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:06,609 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 10:01:06,609 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-29 10:01:06,609 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.71 KB heapSize=78.42 KB 2023-05-29 10:01:06,619 INFO [M:0;jenkins-hbase4:37125] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.71 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7335474c167643c9963621803df724ea 2023-05-29 10:01:06,624 INFO [M:0;jenkins-hbase4:37125] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7335474c167643c9963621803df724ea 2023-05-29 10:01:06,625 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7335474c167643c9963621803df724ea as hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7335474c167643c9963621803df724ea 2023-05-29 10:01:06,630 INFO [M:0;jenkins-hbase4:37125] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7335474c167643c9963621803df724ea 2023-05-29 10:01:06,631 INFO [M:0;jenkins-hbase4:37125] regionserver.HStore(1080): Added hdfs://localhost:43865/user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7335474c167643c9963621803df724ea, entries=18, sequenceid=160, filesize=6.9 K 2023-05-29 10:01:06,631 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegion(2948): Finished flush of dataSize ~64.71 KB/66268, heapSize ~78.41 KB/80288, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=160, compaction requested=false 2023-05-29 10:01:06,633 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:06,633 DEBUG [M:0;jenkins-hbase4:37125] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 10:01:06,633 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/39e043a5-8d3b-a164-79a5-9d1dc10875c1/MasterData/WALs/jenkins-hbase4.apache.org,37125,1685354388519 2023-05-29 10:01:06,637 INFO [M:0;jenkins-hbase4:37125] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-29 10:01:06,637 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 10:01:06,637 INFO [M:0;jenkins-hbase4:37125] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37125 2023-05-29 10:01:06,640 DEBUG [M:0;jenkins-hbase4:37125] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37125,1685354388519 already deleted, retry=false 2023-05-29 10:01:06,704 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 10:01:06,704 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): regionserver:37289-0x10076619f190001, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 10:01:06,704 INFO [RS:0;jenkins-hbase4:37289] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37289,1685354388557; zookeeper connection closed. 2023-05-29 10:01:06,705 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49cef618] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49cef618 2023-05-29 10:01:06,705 INFO [Listener at localhost/32845] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-29 10:01:06,804 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 10:01:06,804 INFO [M:0;jenkins-hbase4:37125] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37125,1685354388519; zookeeper connection closed. 2023-05-29 10:01:06,805 DEBUG [Listener at localhost/32845-EventThread] zookeeper.ZKWatcher(600): master:37125-0x10076619f190000, quorum=127.0.0.1:59759, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-29 10:01:06,806 WARN [Listener at localhost/32845] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 10:01:06,810 INFO [Listener at localhost/32845] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 10:01:06,812 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-29 10:01:06,916 WARN [BP-1903233755-172.31.14.131-1685354387964 heartbeating to localhost/127.0.0.1:43865] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 10:01:06,916 WARN [BP-1903233755-172.31.14.131-1685354387964 heartbeating to localhost/127.0.0.1:43865] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1903233755-172.31.14.131-1685354387964 (Datanode Uuid d4f6772f-5ec4-4b60-9bad-b74e88ae2bcd) service to localhost/127.0.0.1:43865 2023-05-29 10:01:06,916 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/dfs/data/data3/current/BP-1903233755-172.31.14.131-1685354387964] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 10:01:06,917 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/dfs/data/data4/current/BP-1903233755-172.31.14.131-1685354387964] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 10:01:06,918 WARN [Listener at localhost/32845] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-29 10:01:06,923 INFO [Listener at localhost/32845] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 10:01:07,026 WARN [BP-1903233755-172.31.14.131-1685354387964 heartbeating to localhost/127.0.0.1:43865] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-29 10:01:07,026 WARN [BP-1903233755-172.31.14.131-1685354387964 heartbeating to localhost/127.0.0.1:43865] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1903233755-172.31.14.131-1685354387964 (Datanode Uuid 4d18200f-2f86-4b2f-839e-f664935e38cd) service to localhost/127.0.0.1:43865 2023-05-29 10:01:07,027 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/dfs/data/data1/current/BP-1903233755-172.31.14.131-1685354387964] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 10:01:07,027 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/cluster_54401a04-8f2c-bcfd-05ad-9956c4602e3c/dfs/data/data2/current/BP-1903233755-172.31.14.131-1685354387964] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-29 10:01:07,039 INFO [Listener at localhost/32845] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-29 10:01:07,155 INFO [Listener at localhost/32845] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-29 10:01:07,184 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-29 10:01:07,195 INFO [Listener at localhost/32845] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 93) - Thread LEAK? -, OpenFileDescriptor=533 (was 505) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=59 (was 39) - SystemLoadAverage LEAK? 
-, ProcessCount=168 (was 168), AvailableMemoryMB=2628 (was 2880) 2023-05-29 10:01:07,203 INFO [Listener at localhost/32845] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=533, MaxFileDescriptor=60000, SystemLoadAverage=59, ProcessCount=168, AvailableMemoryMB=2628 2023-05-29 10:01:07,203 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-29 10:01:07,203 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/hadoop.log.dir so I do NOT create it in target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf 2023-05-29 10:01:07,203 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/096b0724-dba8-f296-3a17-2e90c296b495/hadoop.tmp.dir so I do NOT create it in target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672, deleteOnExit=true 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/test.cache.data in system properties and HBase conf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/hadoop.tmp.dir in system properties and HBase conf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/hadoop.log.dir in system properties and HBase conf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-29 10:01:07,204 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-29 10:01:07,204 DEBUG 
[Listener at localhost/32845] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/nfs.dump.dir in system properties 
and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/java.io.tmpdir in system properties and HBase conf 2023-05-29 10:01:07,205 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-29 10:01:07,206 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-29 10:01:07,206 INFO [Listener at localhost/32845] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-29 10:01:07,207 WARN [Listener at localhost/32845] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-29 10:01:07,210 WARN [Listener at localhost/32845] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 10:01:07,210 WARN [Listener at localhost/32845] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 10:01:07,250 WARN [Listener at localhost/32845] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 10:01:07,252 INFO [Listener at localhost/32845] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 10:01:07,256 INFO [Listener at localhost/32845] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/java.io.tmpdir/Jetty_localhost_42723_hdfs____sqrpr8/webapp 2023-05-29 10:01:07,347 INFO [Listener at localhost/32845] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42723 2023-05-29 10:01:07,348 WARN [Listener at localhost/32845] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-29 10:01:07,351 WARN [Listener at localhost/32845] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-29 10:01:07,351 WARN [Listener at localhost/32845] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-29 10:01:07,390 WARN [Listener at localhost/44481] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 10:01:07,406 WARN [Listener at localhost/44481] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 10:01:07,408 WARN [Listener at localhost/44481] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 10:01:07,410 INFO [Listener at localhost/44481] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 10:01:07,414 INFO [Listener at localhost/44481] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/java.io.tmpdir/Jetty_localhost_38759_datanode____c9dyt9/webapp 2023-05-29 10:01:07,504 INFO [Listener at localhost/44481] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38759 2023-05-29 10:01:07,511 WARN [Listener at localhost/39887] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 10:01:07,521 WARN [Listener at localhost/39887] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-29 10:01:07,523 WARN [Listener at localhost/39887] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-29 10:01:07,524 INFO [Listener at localhost/39887] log.Slf4jLog(67): jetty-6.1.26 2023-05-29 10:01:07,527 INFO [Listener at localhost/39887] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/java.io.tmpdir/Jetty_localhost_46075_datanode____lr3lf7/webapp 2023-05-29 10:01:07,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe7082191109c56b0: Processing first storage report for DS-782fc828-4dde-4626-8980-da2cdfafb7e2 from datanode 8f7df691-5966-48df-ae81-9273c05cf099 2023-05-29 10:01:07,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe7082191109c56b0: from storage DS-782fc828-4dde-4626-8980-da2cdfafb7e2 node DatanodeRegistration(127.0.0.1:40569, datanodeUuid=8f7df691-5966-48df-ae81-9273c05cf099, infoPort=41267, infoSecurePort=0, ipcPort=39887, storageInfo=lv=-57;cid=testClusterID;nsid=1955942279;c=1685354467212), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 10:01:07,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe7082191109c56b0: Processing first storage report for DS-4045f645-63a2-48a8-bc9c-621ca1e2d47b from datanode 8f7df691-5966-48df-ae81-9273c05cf099 2023-05-29 10:01:07,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* 
processReport 0xe7082191109c56b0: from storage DS-4045f645-63a2-48a8-bc9c-621ca1e2d47b node DatanodeRegistration(127.0.0.1:40569, datanodeUuid=8f7df691-5966-48df-ae81-9273c05cf099, infoPort=41267, infoSecurePort=0, ipcPort=39887, storageInfo=lv=-57;cid=testClusterID;nsid=1955942279;c=1685354467212), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 10:01:07,620 INFO [Listener at localhost/39887] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46075 2023-05-29 10:01:07,626 WARN [Listener at localhost/34825] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-29 10:01:07,719 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a68a62964928ad6: Processing first storage report for DS-6acbd064-ea46-423f-9e33-14649f797b71 from datanode cabdfda8-0355-41aa-949a-8d69f6116216 2023-05-29 10:01:07,719 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a68a62964928ad6: from storage DS-6acbd064-ea46-423f-9e33-14649f797b71 node DatanodeRegistration(127.0.0.1:43525, datanodeUuid=cabdfda8-0355-41aa-949a-8d69f6116216, infoPort=37953, infoSecurePort=0, ipcPort=34825, storageInfo=lv=-57;cid=testClusterID;nsid=1955942279;c=1685354467212), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 10:01:07,719 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a68a62964928ad6: Processing first storage report for DS-5d4d5241-abd7-4e5a-9874-d229064f84ca from datanode cabdfda8-0355-41aa-949a-8d69f6116216 2023-05-29 10:01:07,719 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a68a62964928ad6: from storage DS-5d4d5241-abd7-4e5a-9874-d229064f84ca node DatanodeRegistration(127.0.0.1:43525, datanodeUuid=cabdfda8-0355-41aa-949a-8d69f6116216, infoPort=37953, infoSecurePort=0, ipcPort=34825, storageInfo=lv=-57;cid=testClusterID;nsid=1955942279;c=1685354467212), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-29 10:01:07,735 DEBUG [Listener at localhost/34825] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf 2023-05-29 10:01:07,737 INFO [Listener at localhost/34825] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/zookeeper_0, clientPort=64540, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-29 10:01:07,738 INFO [Listener at localhost/34825] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64540 2023-05-29 10:01:07,738 INFO [Listener at localhost/34825] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,739 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,752 INFO [Listener at localhost/34825] util.FSUtils(471): Created version file at hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5 with version=8 2023-05-29 10:01:07,752 INFO [Listener at localhost/34825] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:37765/user/jenkins/test-data/2f36375f-b911-f85e-2999-e3ebf83a94f1/hbase-staging 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 10:01:07,754 INFO [Listener at localhost/34825] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 10:01:07,755 INFO [Listener at localhost/34825] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40547 2023-05-29 10:01:07,756 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,756 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,757 INFO [Listener at localhost/34825] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40547 connecting to ZooKeeper ensemble=127.0.0.1:64540 2023-05-29 10:01:07,763 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:405470x0, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 10:01:07,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40547-0x1007662d49b0000 connected 2023-05-29 10:01:07,776 DEBUG [Listener at localhost/34825] 
zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 10:01:07,776 DEBUG [Listener at localhost/34825] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:07,776 DEBUG [Listener at localhost/34825] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 10:01:07,777 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40547 2023-05-29 10:01:07,777 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40547 2023-05-29 10:01:07,777 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40547 2023-05-29 10:01:07,777 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40547 2023-05-29 10:01:07,777 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40547 2023-05-29 10:01:07,778 INFO [Listener at localhost/34825] master.HMaster(444): hbase.rootdir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5, hbase.cluster.distributed=false 2023-05-29 10:01:07,790 INFO [Listener at localhost/34825] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-05-29 10:01:07,790 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,790 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,790 INFO [Listener at localhost/34825] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-29 10:01:07,790 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-29 10:01:07,791 INFO [Listener at localhost/34825] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-29 10:01:07,791 INFO [Listener at localhost/34825] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-29 10:01:07,792 INFO [Listener at localhost/34825] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33947 2023-05-29 10:01:07,792 INFO [Listener at localhost/34825] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-29 10:01:07,793 DEBUG [Listener at localhost/34825] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-29 
10:01:07,793 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,794 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,795 INFO [Listener at localhost/34825] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33947 connecting to ZooKeeper ensemble=127.0.0.1:64540 2023-05-29 10:01:07,798 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:339470x0, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-29 10:01:07,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33947-0x1007662d49b0001 connected 2023-05-29 10:01:07,799 DEBUG [Listener at localhost/34825] zookeeper.ZKUtil(164): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-29 10:01:07,799 DEBUG [Listener at localhost/34825] zookeeper.ZKUtil(164): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:07,800 DEBUG [Listener at localhost/34825] zookeeper.ZKUtil(164): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-29 10:01:07,802 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33947 2023-05-29 10:01:07,802 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33947 2023-05-29 10:01:07,802 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33947 2023-05-29 10:01:07,804 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33947 2023-05-29 10:01:07,806 DEBUG [Listener at localhost/34825] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33947 2023-05-29 10:01:07,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,810 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 10:01:07,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,811 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 10:01:07,811 DEBUG 
[Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-29 10:01:07,811 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:07,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 10:01:07,812 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40547,1685354467753 from backup master directory 2023-05-29 10:01:07,812 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-29 10:01:07,814 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,814 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-29 10:01:07,814 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 10:01:07,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,825 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/hbase.id with ID: 9f3d2277-21a8-4515-b693-5c7406c13dcb 2023-05-29 10:01:07,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:07,837 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:07,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x588eb8e1 to 127.0.0.1:64540 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 10:01:07,851 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@497523b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 10:01:07,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-29 10:01:07,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-29 10:01:07,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 10:01:07,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store-tmp 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-29 10:01:07,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:07,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 10:01:07,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/WALs/jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40547%2C1685354467753, suffix=, logDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/WALs/jenkins-hbase4.apache.org,40547,1685354467753, archiveDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/oldWALs, maxLogs=10 2023-05-29 10:01:07,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/WALs/jenkins-hbase4.apache.org,40547,1685354467753/jenkins-hbase4.apache.org%2C40547%2C1685354467753.1685354467864 2023-05-29 10:01:07,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43525,DS-6acbd064-ea46-423f-9e33-14649f797b71,DISK], DatanodeInfoWithStorage[127.0.0.1:40569,DS-782fc828-4dde-4626-8980-da2cdfafb7e2,DISK]] 2023-05-29 10:01:07,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-29 10:01:07,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:07,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,870 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 
1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,872 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-29 10:01:07,872 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-29 10:01:07,872 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:07,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,875 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-29 10:01:07,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 10:01:07,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=704517, jitterRate=-0.10416068136692047}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 10:01:07,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-29 10:01:07,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-29 10:01:07,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-29 10:01:07,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 
2023-05-29 10:01:07,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-29 10:01:07,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-29 10:01:07,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-29 10:01:07,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-29 10:01:07,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-29 10:01:07,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-29 10:01:07,892 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-29 10:01:07,892 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-29 10:01:07,893 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-29 10:01:07,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-29 10:01:07,893 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-29 10:01:07,895 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:07,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-29 10:01:07,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-29 10:01:07,896 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-29 10:01:07,897 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:07,897 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:07,897 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:07,898 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40547,1685354467753, sessionid=0x1007662d49b0000, setting cluster-up flag (Was=false) 2023-05-29 10:01:07,903 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:07,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-29 10:01:07,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,911 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 
10:01:07,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-29 10:01:07,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:07,917 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.hbase-snapshot/.tmp 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:07,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 10:01:07,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1685354497921 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-29 10:01:07,921 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-29 10:01:07,921 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-29 10:01:07,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:07,922 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 10:01:07,922 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-29 10:01:07,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-29 10:01:07,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-29 10:01:07,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-29 10:01:07,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354467923,5,FailOnTimeoutGroup] 2023-05-29 10:01:07,923 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 10:01:07,923 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354467923,5,FailOnTimeoutGroup] 2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:07,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:07,931 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 10:01:07,931 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-29 10:01:07,932 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5 2023-05-29 10:01:07,938 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:07,939 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 10:01:07,940 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/info 2023-05-29 10:01:07,941 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 10:01:07,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:07,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 10:01:07,942 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 10:01:07,943 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 10:01:07,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:07,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 10:01:07,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/table 2023-05-29 10:01:07,945 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 10:01:07,945 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:07,945 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740 2023-05-29 10:01:07,946 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740 2023-05-29 10:01:07,947 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 10:01:07,948 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 10:01:07,950 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 10:01:07,950 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=689748, jitterRate=-0.12294110655784607}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 10:01:07,951 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 10:01:07,951 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 10:01:07,951 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 10:01:07,952 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-29 10:01:07,952 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-29 10:01:07,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-29 10:01:07,953 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-29 10:01:07,954 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 
2023-05-29 10:01:08,007 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(951): ClusterId : 9f3d2277-21a8-4515-b693-5c7406c13dcb 2023-05-29 10:01:08,008 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-29 10:01:08,010 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-29 10:01:08,010 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-29 10:01:08,012 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-29 10:01:08,013 DEBUG [RS:0;jenkins-hbase4:33947] zookeeper.ReadOnlyZKClient(139): Connect 0x5d6cdaab to 127.0.0.1:64540 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 10:01:08,016 DEBUG [RS:0;jenkins-hbase4:33947] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53ab3f76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 10:01:08,016 DEBUG [RS:0;jenkins-hbase4:33947] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5794b1db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 10:01:08,025 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33947 2023-05-29 10:01:08,025 INFO [RS:0;jenkins-hbase4:33947] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-29 10:01:08,025 INFO [RS:0;jenkins-hbase4:33947] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-29 10:01:08,025 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-29 10:01:08,026 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase4.apache.org,40547,1685354467753 with isa=jenkins-hbase4.apache.org/172.31.14.131:33947, startcode=1685354467790 2023-05-29 10:01:08,026 DEBUG [RS:0;jenkins-hbase4:33947] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-29 10:01:08,028 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52781, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-29 10:01:08,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40547] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,030 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5 2023-05-29 10:01:08,030 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44481 2023-05-29 10:01:08,030 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-29 10:01:08,032 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 10:01:08,032 DEBUG [RS:0;jenkins-hbase4:33947] zookeeper.ZKUtil(162): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,032 WARN [RS:0;jenkins-hbase4:33947] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-29 10:01:08,033 INFO [RS:0;jenkins-hbase4:33947] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 10:01:08,033 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1946): logDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,033 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33947,1685354467790] 2023-05-29 10:01:08,036 DEBUG [RS:0;jenkins-hbase4:33947] zookeeper.ZKUtil(162): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,037 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-29 10:01:08,037 INFO [RS:0;jenkins-hbase4:33947] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-29 10:01:08,038 INFO [RS:0;jenkins-hbase4:33947] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-29 10:01:08,038 INFO [RS:0;jenkins-hbase4:33947] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-29 10:01:08,038 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,038 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-29 10:01:08,039 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,040 DEBUG [RS:0;jenkins-hbase4:33947] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-05-29 10:01:08,041 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,041 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,041 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,052 INFO [RS:0;jenkins-hbase4:33947] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-29 10:01:08,052 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33947,1685354467790-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-29 10:01:08,062 INFO [RS:0;jenkins-hbase4:33947] regionserver.Replication(203): jenkins-hbase4.apache.org,33947,1685354467790 started 2023-05-29 10:01:08,062 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33947,1685354467790, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33947, sessionid=0x1007662d49b0001 2023-05-29 10:01:08,062 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-29 10:01:08,062 DEBUG [RS:0;jenkins-hbase4:33947] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,062 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33947,1685354467790' 2023-05-29 10:01:08,062 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-29 10:01:08,062 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33947,1685354467790' 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-29 10:01:08,063 DEBUG [RS:0;jenkins-hbase4:33947] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-29 10:01:08,063 INFO [RS:0;jenkins-hbase4:33947] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-29 10:01:08,063 INFO [RS:0;jenkins-hbase4:33947] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-29 10:01:08,105 DEBUG [jenkins-hbase4:40547] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-29 10:01:08,105 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33947,1685354467790, state=OPENING 2023-05-29 10:01:08,107 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-29 10:01:08,108 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:08,108 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 10:01:08,108 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33947,1685354467790}] 2023-05-29 10:01:08,165 INFO [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33947%2C1685354467790, suffix=, logDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790, archiveDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs, maxLogs=32 2023-05-29 10:01:08,175 INFO [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790/jenkins-hbase4.apache.org%2C33947%2C1685354467790.1685354468166 2023-05-29 10:01:08,175 DEBUG [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43525,DS-6acbd064-ea46-423f-9e33-14649f797b71,DISK], DatanodeInfoWithStorage[127.0.0.1:40569,DS-782fc828-4dde-4626-8980-da2cdfafb7e2,DISK]] 2023-05-29 10:01:08,262 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,262 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-29 10:01:08,265 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60042, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-29 10:01:08,269 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-29 10:01:08,269 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 10:01:08,270 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33947%2C1685354467790.meta, suffix=.meta, logDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790, archiveDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs, maxLogs=32 2023-05-29 10:01:08,281 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790/jenkins-hbase4.apache.org%2C33947%2C1685354467790.meta.1685354468271.meta 2023-05-29 10:01:08,281 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40569,DS-782fc828-4dde-4626-8980-da2cdfafb7e2,DISK], DatanodeInfoWithStorage[127.0.0.1:43525,DS-6acbd064-ea46-423f-9e33-14649f797b71,DISK]] 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-29 10:01:08,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-29 10:01:08,282 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-29 10:01:08,284 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-29 10:01:08,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/info 2023-05-29 10:01:08,285 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/info 2023-05-29 10:01:08,285 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-29 10:01:08,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:08,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-29 10:01:08,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 10:01:08,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/rep_barrier 2023-05-29 10:01:08,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-29 10:01:08,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:08,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-29 10:01:08,289 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/table 2023-05-29 10:01:08,289 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/table 2023-05-29 10:01:08,289 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-29 10:01:08,289 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:08,290 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740 2023-05-29 10:01:08,291 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740 2023-05-29 10:01:08,293 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-29 10:01:08,294 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-29 10:01:08,294 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=760837, jitterRate=-0.032546430826187134}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-29 10:01:08,294 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-29 10:01:08,297 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1685354468262 2023-05-29 10:01:08,301 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-29 10:01:08,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-29 10:01:08,302 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33947,1685354467790, state=OPEN 2023-05-29 10:01:08,303 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-29 10:01:08,303 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-29 10:01:08,305 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-29 10:01:08,305 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33947,1685354467790 in 195 msec 2023-05-29 10:01:08,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-29 10:01:08,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, 
state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 353 msec 2023-05-29 10:01:08,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 390 msec 2023-05-29 10:01:08,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1685354468308, completionTime=-1 2023-05-29 10:01:08,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-29 10:01:08,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-29 10:01:08,311 DEBUG [hconnection-0x9c42d5f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 10:01:08,313 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60046, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 10:01:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-29 10:01:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1685354528315 2023-05-29 10:01:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1685354588315 2023-05-29 10:01:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1685354467753-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1685354467753-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1685354467753-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40547, period=300000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-29 10:01:08,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
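At this point the master has assigned hbase:meta, heard from its single region server, and started its housekeeping chores. In a test, waiting for this state is normally done against HBaseTestingUtility rather than by scraping the log; a rough sketch (method names from the 2.x test utility, worth verifying against the exact version):

    // 'util' is an HBaseTestingUtility whose mini cluster has already been started.
    util.waitFor(30000, () -> util.getMiniHBaseCluster().getMaster().isInitialized());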
2023-05-29 10:01:08,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-29 10:01:08,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-29 10:01:08,326 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-29 10:01:08,327 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-29 10:01:08,327 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-29 10:01:08,329 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.tmp/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,329 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.tmp/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da empty. 2023-05-29 10:01:08,330 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.tmp/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,330 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-29 10:01:08,340 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-29 10:01:08,342 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fa72c7bd0592c6e54c43e333ea68f3da, NAME => 'hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/.tmp 2023-05-29 10:01:08,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:08,348 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fa72c7bd0592c6e54c43e333ea68f3da, disabling compactions & flushes 2023-05-29 10:01:08,349 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 
2023-05-29 10:01:08,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. after waiting 0 ms 2023-05-29 10:01:08,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,349 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,349 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fa72c7bd0592c6e54c43e333ea68f3da: 2023-05-29 10:01:08,351 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-29 10:01:08,351 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354468351"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1685354468351"}]},"ts":"1685354468351"} 2023-05-29 10:01:08,354 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-29 10:01:08,354 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-29 10:01:08,355 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354468354"}]},"ts":"1685354468354"} 2023-05-29 10:01:08,356 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-29 10:01:08,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fa72c7bd0592c6e54c43e333ea68f3da, ASSIGN}] 2023-05-29 10:01:08,365 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fa72c7bd0592c6e54c43e333ea68f3da, ASSIGN 2023-05-29 10:01:08,366 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fa72c7bd0592c6e54c43e333ea68f3da, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33947,1685354467790; forceNewPlan=false, retain=false 2023-05-29 10:01:08,517 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fa72c7bd0592c6e54c43e333ea68f3da, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354468517"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1685354468517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1685354468517"}]},"ts":"1685354468517"} 2023-05-29 10:01:08,519 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fa72c7bd0592c6e54c43e333ea68f3da, server=jenkins-hbase4.apache.org,33947,1685354467790}] 2023-05-29 10:01:08,674 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fa72c7bd0592c6e54c43e333ea68f3da, NAME => 'hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.', STARTKEY => '', ENDKEY => ''} 2023-05-29 10:01:08,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-29 10:01:08,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,675 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,676 INFO [StoreOpener-fa72c7bd0592c6e54c43e333ea68f3da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,678 DEBUG [StoreOpener-fa72c7bd0592c6e54c43e333ea68f3da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/info 2023-05-29 10:01:08,678 DEBUG [StoreOpener-fa72c7bd0592c6e54c43e333ea68f3da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/info 2023-05-29 10:01:08,678 INFO [StoreOpener-fa72c7bd0592c6e54c43e333ea68f3da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fa72c7bd0592c6e54c43e333ea68f3da columnFamilyName info 2023-05-29 10:01:08,679 INFO [StoreOpener-fa72c7bd0592c6e54c43e333ea68f3da-1] regionserver.HStore(310): Store=fa72c7bd0592c6e54c43e333ea68f3da/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-29 10:01:08,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-29 10:01:08,686 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fa72c7bd0592c6e54c43e333ea68f3da; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=841686, jitterRate=0.07026022672653198}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-29 10:01:08,686 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fa72c7bd0592c6e54c43e333ea68f3da: 2023-05-29 10:01:08,688 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da., pid=6, masterSystemTime=1685354468671 2023-05-29 10:01:08,690 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,690 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 
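The pid=4..6 procedures above are the server-side path of an ordinary createTable call. For comparison, a client-side sketch that builds a descriptor similar to the 'info' family shown earlier (the table name and the 'conf' variable are placeholders, not from this log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(10)     // VERSIONS => '10'
              .setInMemory(true)      // IN_MEMORY => 'true'
              .setBlocksize(8192)     // BLOCKSIZE => '8192'
              .build())
          .build());
    }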
2023-05-29 10:01:08,691 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fa72c7bd0592c6e54c43e333ea68f3da, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,691 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1685354468691"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1685354468691"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1685354468691"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1685354468691"}]},"ts":"1685354468691"} 2023-05-29 10:01:08,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-29 10:01:08,694 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fa72c7bd0592c6e54c43e333ea68f3da, server=jenkins-hbase4.apache.org,33947,1685354467790 in 173 msec 2023-05-29 10:01:08,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-29 10:01:08,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fa72c7bd0592c6e54c43e333ea68f3da, ASSIGN in 332 msec 2023-05-29 10:01:08,697 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-29 10:01:08,697 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1685354468697"}]},"ts":"1685354468697"} 2023-05-29 10:01:08,698 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-29 10:01:08,700 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-29 10:01:08,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 375 msec 2023-05-29 10:01:08,726 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-29 10:01:08,728 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-29 10:01:08,728 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:08,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-29 10:01:08,738 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): 
master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 10:01:08,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-05-29 10:01:08,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-29 10:01:08,758 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-29 10:01:08,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-05-29 10:01:08,767 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-29 10:01:08,769 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-29 10:01:08,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.955sec 2023-05-29 10:01:08,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-29 10:01:08,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-29 10:01:08,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-29 10:01:08,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1685354467753-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-29 10:01:08,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40547,1685354467753-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
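The two CreateNamespaceProcedure entries (pid=7 for 'default', pid=8 for 'hbase') register the built-in namespaces. Creating a user namespace goes through the same procedure; a minimal client sketch (names are placeholders, needs an import of org.apache.hadoop.hbase.NamespaceDescriptor):

    // Assumes the same Admin handle as in the previous sketch.
    admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
    admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_ns", "t1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info")).build());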
2023-05-29 10:01:08,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-29 10:01:08,808 DEBUG [Listener at localhost/34825] zookeeper.ReadOnlyZKClient(139): Connect 0x7e546f29 to 127.0.0.1:64540 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-29 10:01:08,814 DEBUG [Listener at localhost/34825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62beba85, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-29 10:01:08,816 DEBUG [hconnection-0x723bd901-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-29 10:01:08,817 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60052, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-29 10:01:08,819 INFO [Listener at localhost/34825] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:08,819 INFO [Listener at localhost/34825] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-29 10:01:08,823 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-29 10:01:08,823 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:08,823 INFO [Listener at localhost/34825] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-29 10:01:08,824 INFO [Listener at localhost/34825] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-29 10:01:08,825 INFO [Listener at localhost/34825] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1, archiveDir=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs, maxLogs=32 2023-05-29 10:01:08,831 INFO [Listener at localhost/34825] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685354468826 2023-05-29 10:01:08,831 DEBUG [Listener at localhost/34825] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40569,DS-782fc828-4dde-4626-8980-da2cdfafb7e2,DISK], DatanodeInfoWithStorage[127.0.0.1:43525,DS-6acbd064-ea46-423f-9e33-14649f797b71,DISK]] 2023-05-29 10:01:08,838 INFO [Listener at localhost/34825] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685354468826 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685354468831 
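The last few entries show a standalone FSHLog being created through WALFactory (blocksize=256 MB, rollsize=128 MB, maxLogs=32) and then rolled, leaving the previous file ready for archiving. A rough sketch of that flow, written from memory of the 2.4-era API and not lifted from TestLogRolling itself, so the constructor and config keys should be double-checked:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionInfoBuilder;
    import org.apache.hadoop.hbase.wal.WAL;
    import org.apache.hadoop.hbase.wal.WALFactory;

    Configuration conf = util.getConfiguration();                            // 'util' as before
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);   // blocksize = 256 MB
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);            // rollsize = 0.5 * blocksize = 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);                           // maxLogs = 32

    WALFactory walFactory = new WALFactory(conf, "test.com,8080,1");
    RegionInfo region = RegionInfoBuilder.newBuilder(TableName.valueOf("demo")).build();
    WAL wal = walFactory.getWAL(region);
    wal.rollWriter();        // forces a new WAL file; the previous one becomes eligible for oldWALs
    walFactory.close();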
2023-05-29 10:01:08,838 DEBUG [Listener at localhost/34825] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43525,DS-6acbd064-ea46-423f-9e33-14649f797b71,DISK], DatanodeInfoWithStorage[127.0.0.1:40569,DS-782fc828-4dde-4626-8980-da2cdfafb7e2,DISK]] 2023-05-29 10:01:08,838 DEBUG [Listener at localhost/34825] wal.AbstractFSWAL(716): hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685354468826 is not closed yet, will try archiving it next time 2023-05-29 10:01:08,839 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1 2023-05-29 10:01:08,846 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/test.com,8080,1/test.com%2C8080%2C1.1685354468826 to hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs/test.com%2C8080%2C1.1685354468826 2023-05-29 10:01:08,848 DEBUG [Listener at localhost/34825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs 2023-05-29 10:01:08,848 INFO [Listener at localhost/34825] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1685354468831) 2023-05-29 10:01:08,848 INFO [Listener at localhost/34825] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-29 10:01:08,848 DEBUG [Listener at localhost/34825] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7e546f29 to 127.0.0.1:64540 2023-05-29 10:01:08,848 DEBUG [Listener at localhost/34825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:08,849 DEBUG [Listener at localhost/34825] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-29 10:01:08,849 DEBUG [Listener at localhost/34825] util.JVMClusterUtil(257): Found active master hash=1679885594, stopped=false 2023-05-29 10:01:08,849 INFO [Listener at localhost/34825] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:08,851 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:08,851 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-29 10:01:08,851 INFO [Listener at localhost/34825] procedure2.ProcedureExecutor(629): Stopping 2023-05-29 10:01:08,851 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-29 10:01:08,852 DEBUG [Listener at localhost/34825] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x588eb8e1 to 127.0.0.1:64540 2023-05-29 10:01:08,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:08,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-29 10:01:08,852 DEBUG [Listener at localhost/34825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:08,853 INFO [Listener at localhost/34825] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,33947,1685354467790' ***** 2023-05-29 10:01:08,853 INFO [Listener at localhost/34825] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-29 10:01:08,853 INFO [RS:0;jenkins-hbase4:33947] regionserver.HeapMemoryManager(220): Stopping 2023-05-29 10:01:08,853 INFO [RS:0;jenkins-hbase4:33947] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-29 10:01:08,853 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-29 10:01:08,853 INFO [RS:0;jenkins-hbase4:33947] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-29 10:01:08,853 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(3303): Received CLOSE for fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,854 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:08,854 DEBUG [RS:0;jenkins-hbase4:33947] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d6cdaab to 127.0.0.1:64540 2023-05-29 10:01:08,854 DEBUG [RS:0;jenkins-hbase4:33947] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:08,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fa72c7bd0592c6e54c43e333ea68f3da, disabling compactions & flushes 2023-05-29 10:01:08,854 INFO [RS:0;jenkins-hbase4:33947] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-29 10:01:08,854 INFO [RS:0;jenkins-hbase4:33947] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-29 10:01:08,854 INFO [RS:0;jenkins-hbase4:33947] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-29 10:01:08,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,854 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-29 10:01:08,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. after waiting 0 ms 2023-05-29 10:01:08,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 
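The next entries show the namespace region flushing its single column family (78 B of data) before closing. The same flush can be requested explicitly from a client, e.g. (hypothetical table name):

    admin.flush(TableName.valueOf("demo"));   // synchronous flush request for every region of the table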
2023-05-29 10:01:08,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fa72c7bd0592c6e54c43e333ea68f3da 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-29 10:01:08,888 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-29 10:01:08,888 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, fa72c7bd0592c6e54c43e333ea68f3da=hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da.} 2023-05-29 10:01:08,889 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-29 10:01:08,889 DEBUG [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1504): Waiting on 1588230740, fa72c7bd0592c6e54c43e333ea68f3da 2023-05-29 10:01:08,889 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-29 10:01:08,889 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-29 10:01:08,889 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-29 10:01:08,889 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-29 10:01:08,890 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-29 10:01:08,912 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/.tmp/info/69ae6f7f034c47b8aee0ee88b368a857 2023-05-29 10:01:08,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/.tmp/info/6ac6208a9ac34b04994357222e714753 2023-05-29 10:01:08,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/.tmp/info/6ac6208a9ac34b04994357222e714753 as hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/info/6ac6208a9ac34b04994357222e714753 2023-05-29 10:01:08,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/info/6ac6208a9ac34b04994357222e714753, entries=2, sequenceid=6, filesize=4.8 K 2023-05-29 10:01:08,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for fa72c7bd0592c6e54c43e333ea68f3da in 77ms, sequenceid=6, compaction requested=false 2023-05-29 10:01:08,932 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-29 10:01:08,938 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/.tmp/table/4dc82f4dcdc64907b1f82484e1ce85e6 2023-05-29 10:01:08,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/namespace/fa72c7bd0592c6e54c43e333ea68f3da/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-29 10:01:08,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fa72c7bd0592c6e54c43e333ea68f3da: 2023-05-29 10:01:08,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1685354468325.fa72c7bd0592c6e54c43e333ea68f3da. 2023-05-29 10:01:08,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/.tmp/info/69ae6f7f034c47b8aee0ee88b368a857 as hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/info/69ae6f7f034c47b8aee0ee88b368a857 2023-05-29 10:01:08,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/info/69ae6f7f034c47b8aee0ee88b368a857, entries=10, sequenceid=9, filesize=5.9 K 2023-05-29 10:01:08,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/.tmp/table/4dc82f4dcdc64907b1f82484e1ce85e6 as hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/table/4dc82f4dcdc64907b1f82484e1ce85e6 2023-05-29 10:01:08,954 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/table/4dc82f4dcdc64907b1f82484e1ce85e6, entries=2, sequenceid=9, filesize=4.7 K 2023-05-29 10:01:08,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1290, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 66ms, sequenceid=9, compaction requested=false 2023-05-29 10:01:08,955 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-29 10:01:08,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-29 10:01:08,962 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-29 10:01:08,962 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-29 10:01:08,963 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-29 10:01:08,963 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-29 10:01:09,047 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-29 10:01:09,048 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-29 10:01:09,089 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33947,1685354467790; all regions closed. 2023-05-29 10:01:09,090 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:09,095 DEBUG [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs 2023-05-29 10:01:09,096 INFO [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33947%2C1685354467790.meta:.meta(num 1685354468271) 2023-05-29 10:01:09,096 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/WALs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:09,102 DEBUG [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/oldWALs 2023-05-29 10:01:09,102 INFO [RS:0;jenkins-hbase4:33947] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase4.apache.org%2C33947%2C1685354467790:(num 1685354468166) 2023-05-29 10:01:09,102 DEBUG [RS:0;jenkins-hbase4:33947] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:09,102 INFO [RS:0;jenkins-hbase4:33947] regionserver.LeaseManager(133): Closed leases 2023-05-29 10:01:09,102 INFO [RS:0;jenkins-hbase4:33947] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-29 10:01:09,102 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-29 10:01:09,103 INFO [RS:0;jenkins-hbase4:33947] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33947 2023-05-29 10:01:09,106 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 10:01:09,106 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33947,1685354467790 2023-05-29 10:01:09,106 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-29 10:01:09,106 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33947,1685354467790] 2023-05-29 10:01:09,107 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33947,1685354467790; numProcessing=1 2023-05-29 10:01:09,109 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33947,1685354467790 already deleted, retry=false 2023-05-29 10:01:09,109 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33947,1685354467790 expired; onlineServers=0 2023-05-29 10:01:09,109 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase4.apache.org,40547,1685354467753' ***** 2023-05-29 10:01:09,109 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-29 10:01:09,109 DEBUG [M:0;jenkins-hbase4:40547] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@601e9d46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-05-29 10:01:09,109 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40547,1685354467753 2023-05-29 10:01:09,109 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40547,1685354467753; all regions closed. 2023-05-29 10:01:09,109 DEBUG [M:0;jenkins-hbase4:40547] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-29 10:01:09,109 DEBUG [M:0;jenkins-hbase4:40547] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-29 10:01:09,109 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
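The NodeDeleted event for /hbase/rs/jenkins-hbase4.apache.org,33947,... above is the region server's ephemeral znode disappearing as its ZooKeeper session ends, which is what RegionServerTracker reacts to when it processes the expiration. The underlying ZooKeeper mechanism, sketched with the plain client API against the quorum address from this log (the znode path is a made-up example):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    ZooKeeper zk = new ZooKeeper("127.0.0.1:64540", 90000, event -> { });
    zk.create("/hbase/rs/example-host,33947,0", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    // Closing the session removes the ephemeral node; anyone watching the /hbase/rs children
    // (like the master) then receives NodeChildrenChanged / NodeDeleted events.
    zk.close();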
2023-05-29 10:01:09,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354467923] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1685354467923,5,FailOnTimeoutGroup]
2023-05-29 10:01:09,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354467923] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1685354467923,5,FailOnTimeoutGroup]
2023-05-29 10:01:09,109 DEBUG [M:0;jenkins-hbase4:40547] cleaner.HFileCleaner(317): Stopping file delete threads
2023-05-29 10:01:09,111 INFO [M:0;jenkins-hbase4:40547] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-05-29 10:01:09,111 INFO [M:0;jenkins-hbase4:40547] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-05-29 10:01:09,112 INFO [M:0;jenkins-hbase4:40547] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-05-29 10:01:09,112 DEBUG [M:0;jenkins-hbase4:40547] master.HMaster(1512): Stopping service threads
2023-05-29 10:01:09,112 INFO [M:0;jenkins-hbase4:40547] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-05-29 10:01:09,112 ERROR [M:0;jenkins-hbase4:40547] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup]
2023-05-29 10:01:09,112 INFO [M:0;jenkins-hbase4:40547] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-05-29 10:01:09,112 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-05-29 10:01:09,112 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-05-29 10:01:09,112 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-05-29 10:01:09,113 DEBUG [M:0;jenkins-hbase4:40547] zookeeper.ZKUtil(398): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-05-29 10:01:09,113 WARN [M:0;jenkins-hbase4:40547] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-05-29 10:01:09,113 INFO [M:0;jenkins-hbase4:40547] assignment.AssignmentManager(315): Stopping assignment manager
2023-05-29 10:01:09,113 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-05-29 10:01:09,116 INFO [M:0;jenkins-hbase4:40547] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-05-29 10:01:09,117 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-29 10:01:09,117 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-29 10:01:09,117 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-29 10:01:09,117 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-29 10:01:09,117 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-29 10:01:09,117 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.07 KB heapSize=29.55 KB
2023-05-29 10:01:09,127 INFO [M:0;jenkins-hbase4:40547] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.07 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8eff4b12c10d4e4db16f5f000c701a41
2023-05-29 10:01:09,131 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8eff4b12c10d4e4db16f5f000c701a41 as hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8eff4b12c10d4e4db16f5f000c701a41
2023-05-29 10:01:09,135 INFO [M:0;jenkins-hbase4:40547] regionserver.HStore(1080): Added hdfs://localhost:44481/user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8eff4b12c10d4e4db16f5f000c701a41, entries=8, sequenceid=66, filesize=6.3 K
2023-05-29 10:01:09,137 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(2948): Finished flush of dataSize ~24.07 KB/24646, heapSize ~29.54 KB/30248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=66, compaction requested=false
2023-05-29 10:01:09,138 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-29 10:01:09,138 DEBUG [M:0;jenkins-hbase4:40547] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-29 10:01:09,139 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/80306a87-40f6-5e2b-f627-241779f646c5/MasterData/WALs/jenkins-hbase4.apache.org,40547,1685354467753
2023-05-29 10:01:09,141 INFO [M:0;jenkins-hbase4:40547] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-05-29 10:01:09,141 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-05-29 10:01:09,142 INFO [M:0;jenkins-hbase4:40547] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40547
2023-05-29 10:01:09,145 DEBUG [M:0;jenkins-hbase4:40547] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40547,1685354467753 already deleted, retry=false
2023-05-29 10:01:09,251 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 10:01:09,252 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): master:40547-0x1007662d49b0000, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 10:01:09,251 INFO [M:0;jenkins-hbase4:40547] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40547,1685354467753; zookeeper connection closed.
2023-05-29 10:01:09,352 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 10:01:09,352 INFO [RS:0;jenkins-hbase4:33947] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33947,1685354467790; zookeeper connection closed.
2023-05-29 10:01:09,352 DEBUG [Listener at localhost/34825-EventThread] zookeeper.ZKWatcher(600): regionserver:33947-0x1007662d49b0001, quorum=127.0.0.1:64540, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-05-29 10:01:09,352 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@840ef50] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@840ef50
2023-05-29 10:01:09,353 INFO [Listener at localhost/34825] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete
2023-05-29 10:01:09,353 WARN [Listener at localhost/34825] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-29 10:01:09,357 INFO [Listener at localhost/34825] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-29 10:01:09,460 WARN [BP-379838982-172.31.14.131-1685354467212 heartbeating to localhost/127.0.0.1:44481] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-29 10:01:09,460 WARN [BP-379838982-172.31.14.131-1685354467212 heartbeating to localhost/127.0.0.1:44481] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-379838982-172.31.14.131-1685354467212 (Datanode Uuid cabdfda8-0355-41aa-949a-8d69f6116216) service to localhost/127.0.0.1:44481
2023-05-29 10:01:09,461 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/dfs/data/data3/current/BP-379838982-172.31.14.131-1685354467212] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-29 10:01:09,461 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/dfs/data/data4/current/BP-379838982-172.31.14.131-1685354467212] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-29 10:01:09,462 WARN [Listener at localhost/34825] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-29 10:01:09,465 INFO [Listener at localhost/34825] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-29 10:01:09,567 WARN [BP-379838982-172.31.14.131-1685354467212 heartbeating to localhost/127.0.0.1:44481] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-05-29 10:01:09,567 WARN [BP-379838982-172.31.14.131-1685354467212 heartbeating to localhost/127.0.0.1:44481] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-379838982-172.31.14.131-1685354467212 (Datanode Uuid 8f7df691-5966-48df-ae81-9273c05cf099) service to localhost/127.0.0.1:44481
2023-05-29 10:01:09,567 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/dfs/data/data1/current/BP-379838982-172.31.14.131-1685354467212] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-29 10:01:09,568 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/526287f8-4c5b-58bb-f7d7-6d5a84a15eaf/cluster_29546bab-c132-1353-ea7f-2c129ec03672/dfs/data/data2/current/BP-379838982-172.31.14.131-1685354467212] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-05-29 10:01:09,577 INFO [Listener at localhost/34825] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-05-29 10:01:09,688 INFO [Listener at localhost/34825] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-05-29 10:01:09,698 INFO [Listener at localhost/34825] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-05-29 10:01:09,710 INFO [Listener at localhost/34825] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 105) - Thread LEAK? -, OpenFileDescriptor=560 (was 533) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=70 (was 59) - SystemLoadAverage LEAK? -, ProcessCount=168 (was 168), AvailableMemoryMB=2611 (was 2628)