2023-05-24 21:52:42,230 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1 2023-05-24 21:52:42,245 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.regionserver.wal.TestLogRolling timeout: 13 mins 2023-05-24 21:52:42,279 INFO [Time-limited test] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=10, OpenFileDescriptor=264, MaxFileDescriptor=60000, SystemLoadAverage=380, ProcessCount=169, AvailableMemoryMB=11068 2023-05-24 21:52:42,286 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:52:42,287 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590, deleteOnExit=true 2023-05-24 21:52:42,287 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:52:42,288 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/test.cache.data in system properties and HBase conf 2023-05-24 21:52:42,289 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:52:42,289 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:52:42,290 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:52:42,290 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:52:42,290 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:52:42,396 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-05-24 21:52:42,745 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-24 21:52:42,748 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:52:42,749 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:52:42,749 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:52:42,749 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:52:42,750 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:52:42,750 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:52:42,750 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:52:42,750 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:52:42,751 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:52:42,751 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/nfs.dump.dir in system properties and HBase conf 2023-05-24 21:52:42,751 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:52:42,751 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:52:42,752 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:52:42,752 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:52:43,292 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:52:43,304 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:52:43,308 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:52:43,572 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-05-24 21:52:43,714 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-05-24 21:52:43,731 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:52:43,777 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:52:43,813 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/java.io.tmpdir/Jetty_localhost_localdomain_40953_hdfs____rx4jud/webapp 2023-05-24 21:52:44,005 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:40953 2023-05-24 21:52:44,015 WARN [Time-limited test] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-24 21:52:44,018 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:52:44,018 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:52:44,490 WARN [Listener at localhost.localdomain/34243] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:52:44,546 WARN [Listener at localhost.localdomain/34243] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:52:44,561 WARN [Listener at localhost.localdomain/34243] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:52:44,567 INFO [Listener at localhost.localdomain/34243] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:52:44,571 INFO [Listener at localhost.localdomain/34243] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/java.io.tmpdir/Jetty_localhost_45923_datanode____lyz62o/webapp 2023-05-24 21:52:44,646 INFO [Listener at localhost.localdomain/34243] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45923 2023-05-24 21:52:44,903 WARN [Listener at localhost.localdomain/43213] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:52:44,911 WARN [Listener at localhost.localdomain/43213] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:52:44,916 WARN [Listener at localhost.localdomain/43213] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:52:44,918 INFO [Listener at localhost.localdomain/43213] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:52:44,922 INFO [Listener at localhost.localdomain/43213] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/java.io.tmpdir/Jetty_localhost_46235_datanode____y136ex/webapp 2023-05-24 21:52:44,996 INFO [Listener at localhost.localdomain/43213] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46235 2023-05-24 21:52:45,007 WARN [Listener at localhost.localdomain/44071] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:52:45,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ad3d5ccb9e171e5: Processing first storage report for DS-256c25c1-aa8a-421f-b13a-5b700690ef21 from datanode 6aeec9eb-a448-49b2-bd46-36a70e7e1eb8 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ad3d5ccb9e171e5: from storage DS-256c25c1-aa8a-421f-b13a-5b700690ef21 node DatanodeRegistration(127.0.0.1:45567, datanodeUuid=6aeec9eb-a448-49b2-bd46-36a70e7e1eb8, infoPort=45069, infoSecurePort=0, ipcPort=43213, storageInfo=lv=-57;cid=testClusterID;nsid=1862947915;c=1684965163379), 
blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x70f29d50ab35197a: Processing first storage report for DS-90a6120e-114a-4b36-8179-08d946e5896b from datanode 1e709f5d-dc9a-4bce-a323-e3856935fee2 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x70f29d50ab35197a: from storage DS-90a6120e-114a-4b36-8179-08d946e5896b node DatanodeRegistration(127.0.0.1:40049, datanodeUuid=1e709f5d-dc9a-4bce-a323-e3856935fee2, infoPort=34381, infoSecurePort=0, ipcPort=44071, storageInfo=lv=-57;cid=testClusterID;nsid=1862947915;c=1684965163379), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8ad3d5ccb9e171e5: Processing first storage report for DS-fb71110b-2ae8-4198-bdba-11832dfa25f7 from datanode 6aeec9eb-a448-49b2-bd46-36a70e7e1eb8 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8ad3d5ccb9e171e5: from storage DS-fb71110b-2ae8-4198-bdba-11832dfa25f7 node DatanodeRegistration(127.0.0.1:45567, datanodeUuid=6aeec9eb-a448-49b2-bd46-36a70e7e1eb8, infoPort=45069, infoSecurePort=0, ipcPort=43213, storageInfo=lv=-57;cid=testClusterID;nsid=1862947915;c=1684965163379), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:52:45,238 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x70f29d50ab35197a: Processing first storage report for DS-c5096996-2fc0-4dd9-b004-0c0950bd2a81 from datanode 1e709f5d-dc9a-4bce-a323-e3856935fee2 2023-05-24 21:52:45,239 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x70f29d50ab35197a: from storage DS-c5096996-2fc0-4dd9-b004-0c0950bd2a81 node DatanodeRegistration(127.0.0.1:40049, datanodeUuid=1e709f5d-dc9a-4bce-a323-e3856935fee2, infoPort=34381, infoSecurePort=0, ipcPort=44071, storageInfo=lv=-57;cid=testClusterID;nsid=1862947915;c=1684965163379), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:52:45,346 DEBUG [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1 2023-05-24 21:52:45,403 INFO [Listener at localhost.localdomain/44071] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/zookeeper_0, clientPort=57676, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:52:45,415 INFO [Listener at localhost.localdomain/44071] 
zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57676 2023-05-24 21:52:45,422 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:45,424 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:46,045 INFO [Listener at localhost.localdomain/44071] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3 with version=8 2023-05-24 21:52:46,046 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:52:46,319 INFO [Listener at localhost.localdomain/44071] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-05-24 21:52:46,673 INFO [Listener at localhost.localdomain/44071] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:52:46,698 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:52:46,699 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:52:46,699 INFO [Listener at localhost.localdomain/44071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:52:46,699 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:52:46,699 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:52:46,810 INFO [Listener at localhost.localdomain/44071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:52:46,868 DEBUG [Listener at localhost.localdomain/44071] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-05-24 21:52:46,942 INFO [Listener at localhost.localdomain/44071] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33421 2023-05-24 21:52:46,951 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:46,954 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so 
can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:46,972 INFO [Listener at localhost.localdomain/44071] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33421 connecting to ZooKeeper ensemble=127.0.0.1:57676 2023-05-24 21:52:47,045 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:334210x0, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:52:47,047 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33421-0x1017f76217a0000 connected 2023-05-24 21:52:47,068 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:52:47,069 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:52:47,073 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:52:47,083 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33421 2023-05-24 21:52:47,084 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33421 2023-05-24 21:52:47,084 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33421 2023-05-24 21:52:47,085 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33421 2023-05-24 21:52:47,085 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33421 2023-05-24 21:52:47,093 INFO [Listener at localhost.localdomain/44071] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3, hbase.cluster.distributed=false 2023-05-24 21:52:47,153 INFO [Listener at localhost.localdomain/44071] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:52:47,153 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:52:47,154 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:52:47,154 INFO [Listener at localhost.localdomain/44071] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:52:47,154 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-05-24 21:52:47,154 INFO [Listener at localhost.localdomain/44071] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:52:47,158 INFO [Listener at localhost.localdomain/44071] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:52:47,161 INFO [Listener at localhost.localdomain/44071] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43575 2023-05-24 21:52:47,163 INFO [Listener at localhost.localdomain/44071] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:52:47,168 DEBUG [Listener at localhost.localdomain/44071] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:52:47,169 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:47,171 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:47,172 INFO [Listener at localhost.localdomain/44071] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43575 connecting to ZooKeeper ensemble=127.0.0.1:57676 2023-05-24 21:52:47,177 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:435750x0, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:52:47,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43575-0x1017f76217a0001 connected 2023-05-24 21:52:47,178 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:52:47,179 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:52:47,180 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ZKUtil(164): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:52:47,181 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43575 2023-05-24 21:52:47,181 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43575 2023-05-24 21:52:47,182 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43575 2023-05-24 21:52:47,182 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43575 2023-05-24 21:52:47,182 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcExecutor(311): Started handlerCount=1 
with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43575 2023-05-24 21:52:47,184 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,192 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:52:47,194 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,212 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:52:47,212 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:52:47,212 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,213 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:52:47,214 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33421,1684965166153 from backup master directory 2023-05-24 21:52:47,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:52:47,216 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,216 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:52:47,217 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:52:47,217 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,219 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-05-24 21:52:47,220 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-05-24 21:52:47,299 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase.id with ID: ffb13cdb-b23a-4f0f-a035-d6c6b3eb990d 2023-05-24 21:52:47,349 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:47,365 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x775a92dc to 127.0.0.1:57676 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:52:47,433 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aceca48, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:52:47,451 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:52:47,453 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:52:47,460 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:52:47,486 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store-tmp 2023-05-24 21:52:47,513 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): 
Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-24 21:52:47,513 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-05-24 21:52:47,513 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 21:52:47,513 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 21:52:47,514 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-05-24 21:52:47,514 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 21:52:47,514 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-05-24 21:52:47,514 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-05-24 21:52:47,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/WALs/jenkins-hbase20.apache.org,33421,1684965166153
2023-05-24 21:52:47,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33421%2C1684965166153, suffix=, logDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/WALs/jenkins-hbase20.apache.org,33421,1684965166153, archiveDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/oldWALs, maxLogs=10
2023-05-24 21:52:47,552 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate()
	at java.lang.Class.getMethod(Class.java:1786)
	at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750)
	at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160)
	at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:307)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog.createWriterInstance(FSHLog.java:70)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574)
	at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
	at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295)
	at org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:200)
	at org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:220)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:348)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-05-24 21:52:47,574 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/WALs/jenkins-hbase20.apache.org,33421,1684965166153/jenkins-hbase20.apache.org%2C33421%2C1684965166153.1684965167551
2023-05-24 21:52:47,574 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]]
2023-05-24 21:52:47,575 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-05-24 21:52:47,575 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-05-24 21:52:47,578 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-05-24 21:52:47,579 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-05-24 21:52:47,629 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-05-24 21:52:47,636 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-05-24 21:52:47,656 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:52:47,668 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:47,674 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:52:47,675 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:52:47,689 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:52:47,693 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:52:47,694 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=751984, jitterRate=-0.04380340874195099}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:52:47,694 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:52:47,695 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:52:47,712 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:52:47,712 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:52:47,714 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-05-24 21:52:47,716 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-24 21:52:47,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 28 msec 2023-05-24 21:52:47,745 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:52:47,768 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:52:47,773 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:52:47,797 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:52:47,800 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-05-24 21:52:47,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:52:47,806 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:52:47,810 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:52:47,812 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,813 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:52:47,814 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:52:47,824 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:52:47,828 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:52:47,828 DEBUG 
[Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:52:47,828 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33421,1684965166153, sessionid=0x1017f76217a0000, setting cluster-up flag (Was=false) 2023-05-24 21:52:47,843 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,848 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:52:47,849 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,853 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:47,856 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:52:47,857 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:47,859 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.hbase-snapshot/.tmp 2023-05-24 21:52:47,886 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(951): ClusterId : ffb13cdb-b23a-4f0f-a035-d6c6b3eb990d 2023-05-24 21:52:47,889 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:52:47,894 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:52:47,894 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:52:47,897 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:52:47,897 DEBUG [RS:0;jenkins-hbase20:43575] zookeeper.ReadOnlyZKClient(139): Connect 0x023df191 to 127.0.0.1:57676 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:52:47,901 DEBUG [RS:0;jenkins-hbase20:43575] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cfa1fa6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:52:47,902 DEBUG [RS:0;jenkins-hbase20:43575] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8c59ae1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:52:47,921 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43575 2023-05-24 21:52:47,925 INFO [RS:0;jenkins-hbase20:43575] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:52:47,925 INFO [RS:0;jenkins-hbase20:43575] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:52:47,925 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1022): About to register with Master. 2023-05-24 21:52:47,927 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,33421,1684965166153 with isa=jenkins-hbase20.apache.org/148.251.75.209:43575, startcode=1684965167152 2023-05-24 21:52:47,940 DEBUG [RS:0;jenkins-hbase20:43575] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:52:47,964 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:52:47,972 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:52:47,973 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:47,975 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965197975 2023-05-24 21:52:47,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:52:47,979 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:52:47,980 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:52:47,984 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:52:47,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:52:47,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:52:47,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:52:47,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:52:47,997 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:52:48,002 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-24 21:52:48,004 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:52:48,005 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:52:48,005 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:52:48,008 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:52:48,009 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:52:48,012 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965168012,5,FailOnTimeoutGroup] 2023-05-24 21:52:48,013 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965168013,5,FailOnTimeoutGroup] 2023-05-24 21:52:48,013 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,013 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:52:48,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-24 21:52:48,034 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:52:48,035 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:52:48,036 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3 2023-05-24 21:52:48,058 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:48,062 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:52:48,065 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/info 2023-05-24 21:52:48,066 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:52:48,067 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35901, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:52:48,067 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,068 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:52:48,071 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:52:48,072 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:52:48,073 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,073 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:52:48,076 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/table 2023-05-24 21:52:48,077 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:52:48,078 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 
21:52:48,080 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740 2023-05-24 21:52:48,081 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740 2023-05-24 21:52:48,085 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:52:48,087 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:52:48,091 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:52:48,092 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=870898, jitterRate=0.10740460455417633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:52:48,092 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:52:48,092 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:52:48,092 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:52:48,092 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:52:48,092 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:52:48,092 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:52:48,093 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:52:48,094 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:52:48,100 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:52:48,100 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:52:48,101 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3 2023-05-24 21:52:48,101 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34243 2023-05-24 21:52:48,101 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:52:48,105 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:52:48,106 DEBUG [RS:0;jenkins-hbase20:43575] zookeeper.ZKUtil(162): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,106 WARN [RS:0;jenkins-hbase20:43575] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 21:52:48,106 INFO [RS:0;jenkins-hbase20:43575] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:52:48,107 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,109 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43575,1684965167152] 2023-05-24 21:52:48,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:52:48,118 DEBUG [RS:0;jenkins-hbase20:43575] zookeeper.ZKUtil(162): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,123 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:52:48,126 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:52:48,130 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:52:48,138 INFO [RS:0;jenkins-hbase20:43575] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:52:48,154 INFO [RS:0;jenkins-hbase20:43575] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:52:48,157 INFO [RS:0;jenkins-hbase20:43575] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:52:48,157 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,158 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:52:48,163 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,164 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:52:48,165 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,165 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,165 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,165 DEBUG [RS:0;jenkins-hbase20:43575] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:52:48,166 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,166 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,166 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,178 INFO [RS:0;jenkins-hbase20:43575] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:52:48,181 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43575,1684965167152-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:52:48,193 INFO [RS:0;jenkins-hbase20:43575] regionserver.Replication(203): jenkins-hbase20.apache.org,43575,1684965167152 started 2023-05-24 21:52:48,193 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43575,1684965167152, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43575, sessionid=0x1017f76217a0001 2023-05-24 21:52:48,194 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:52:48,194 DEBUG [RS:0;jenkins-hbase20:43575] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,194 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43575,1684965167152' 2023-05-24 21:52:48,194 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:52:48,195 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:52:48,195 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:52:48,195 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:52:48,195 DEBUG [RS:0;jenkins-hbase20:43575] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,196 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43575,1684965167152' 2023-05-24 21:52:48,196 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:52:48,196 DEBUG [RS:0;jenkins-hbase20:43575] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:52:48,197 DEBUG [RS:0;jenkins-hbase20:43575] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:52:48,197 INFO [RS:0;jenkins-hbase20:43575] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:52:48,197 INFO [RS:0;jenkins-hbase20:43575] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 21:52:48,278 DEBUG [jenkins-hbase20:33421] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:52:48,281 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43575,1684965167152, state=OPENING 2023-05-24 21:52:48,289 DEBUG [PEWorker-5] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:52:48,290 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:48,291 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:52:48,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43575,1684965167152}] 2023-05-24 21:52:48,308 INFO [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43575%2C1684965167152, suffix=, logDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152, archiveDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/oldWALs, maxLogs=32 2023-05-24 21:52:48,323 INFO [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965168312 2023-05-24 21:52:48,324 DEBUG [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:52:48,486 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:48,488 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:52:48,491 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36420, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:52:48,505 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:52:48,506 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:52:48,510 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43575%2C1684965167152.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152, archiveDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/oldWALs, maxLogs=32 2023-05-24 21:52:48,524 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.meta.1684965168511.meta 2023-05-24 21:52:48,524 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:52:48,524 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:52:48,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:52:48,542 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:52:48,546 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:52:48,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:52:48,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:48,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:52:48,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:52:48,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:52:48,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/info 2023-05-24 21:52:48,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/info 2023-05-24 21:52:48,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:52:48,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,558 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:52:48,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:52:48,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:52:48,560 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:52:48,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,561 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:52:48,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/table 2023-05-24 21:52:48,563 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/table 2023-05-24 21:52:48,563 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:52:48,564 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:48,566 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740 2023-05-24 21:52:48,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740 2023-05-24 21:52:48,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:52:48,574 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:52:48,575 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=712790, jitterRate=-0.09364065527915955}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:52:48,575 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:52:48,584 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965168480 2023-05-24 21:52:48,600 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:52:48,601 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:52:48,601 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43575,1684965167152, state=OPEN 2023-05-24 21:52:48,603 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:52:48,603 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:52:48,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:52:48,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43575,1684965167152 in 307 msec 2023-05-24 
21:52:48,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:52:48,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 501 msec 2023-05-24 21:52:48,622 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 715 msec 2023-05-24 21:52:48,622 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965168622, completionTime=-1 2023-05-24 21:52:48,623 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:52:48,623 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:52:48,678 DEBUG [hconnection-0xad3076e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:52:48,681 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:52:48,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:52:48,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965228696 2023-05-24 21:52:48,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965288696 2023-05-24 21:52:48,696 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-05-24 21:52:48,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33421,1684965166153-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33421,1684965166153-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33421,1684965166153-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33421, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:52:48,724 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-05-24 21:52:48,730 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:52:48,738 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-05-24 21:52:48,739 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:52:48,748 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:52:48,750 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:52:48,752 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:52:48,771 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:48,773 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee empty. 
2023-05-24 21:52:48,774 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:48,774 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:52:48,829 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:52:48,832 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e8236e18b14a0c7b530f55341f53a3ee, NAME => 'hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp 2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e8236e18b14a0c7b530f55341f53a3ee, disabling compactions & flushes 2023-05-24 21:52:48,851 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. after waiting 0 ms 2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:52:48,851 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 
2023-05-24 21:52:48,851 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e8236e18b14a0c7b530f55341f53a3ee: 2023-05-24 21:52:48,856 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:52:48,872 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965168859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965168859"}]},"ts":"1684965168859"} 2023-05-24 21:52:48,893 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:52:48,895 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:52:48,899 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965168895"}]},"ts":"1684965168895"} 2023-05-24 21:52:48,903 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:52:48,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e8236e18b14a0c7b530f55341f53a3ee, ASSIGN}] 2023-05-24 21:52:48,915 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e8236e18b14a0c7b530f55341f53a3ee, ASSIGN 2023-05-24 21:52:48,917 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e8236e18b14a0c7b530f55341f53a3ee, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43575,1684965167152; forceNewPlan=false, retain=false 2023-05-24 21:52:49,069 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e8236e18b14a0c7b530f55341f53a3ee, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:49,071 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965169069"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965169069"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965169069"}]},"ts":"1684965169069"} 2023-05-24 21:52:49,084 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e8236e18b14a0c7b530f55341f53a3ee, server=jenkins-hbase20.apache.org,43575,1684965167152}] 2023-05-24 21:52:49,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 
2023-05-24 21:52:49,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e8236e18b14a0c7b530f55341f53a3ee, NAME => 'hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:52:49,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:49,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,259 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,261 INFO [StoreOpener-e8236e18b14a0c7b530f55341f53a3ee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,263 DEBUG [StoreOpener-e8236e18b14a0c7b530f55341f53a3ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/info 2023-05-24 21:52:49,263 DEBUG [StoreOpener-e8236e18b14a0c7b530f55341f53a3ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/info 2023-05-24 21:52:49,264 INFO [StoreOpener-e8236e18b14a0c7b530f55341f53a3ee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e8236e18b14a0c7b530f55341f53a3ee columnFamilyName info 2023-05-24 21:52:49,265 INFO [StoreOpener-e8236e18b14a0c7b530f55341f53a3ee-1] regionserver.HStore(310): Store=e8236e18b14a0c7b530f55341f53a3ee/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:49,267 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,268 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:52:49,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:52:49,279 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e8236e18b14a0c7b530f55341f53a3ee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=828974, jitterRate=0.0540957897901535}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:52:49,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e8236e18b14a0c7b530f55341f53a3ee: 2023-05-24 21:52:49,281 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee., pid=6, masterSystemTime=1684965169239 2023-05-24 21:52:49,285 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:52:49,285 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 
2023-05-24 21:52:49,287 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e8236e18b14a0c7b530f55341f53a3ee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:49,288 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965169286"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965169286"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965169286"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965169286"}]},"ts":"1684965169286"} 2023-05-24 21:52:49,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:52:49,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e8236e18b14a0c7b530f55341f53a3ee, server=jenkins-hbase20.apache.org,43575,1684965167152 in 208 msec 2023-05-24 21:52:49,300 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:52:49,301 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e8236e18b14a0c7b530f55341f53a3ee, ASSIGN in 386 msec 2023-05-24 21:52:49,302 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:52:49,302 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965169302"}]},"ts":"1684965169302"} 2023-05-24 21:52:49,305 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:52:49,308 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:52:49,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 568 msec 2023-05-24 21:52:49,352 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:52:49,353 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:52:49,353 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:49,395 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:52:49,415 DEBUG [Listener at localhost.localdomain/44071-EventThread] 
zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:52:49,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-05-24 21:52:49,430 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:52:49,442 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:52:49,448 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 17 msec 2023-05-24 21:52:49,460 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:52:49,462 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:52:49,463 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.246sec 2023-05-24 21:52:49,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:52:49,468 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:52:49,468 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:52:49,469 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33421,1684965166153-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:52:49,470 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33421,1684965166153-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 21:52:49,483 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:52:49,492 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ReadOnlyZKClient(139): Connect 0x23c67614 to 127.0.0.1:57676 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:52:49,497 DEBUG [Listener at localhost.localdomain/44071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a07f602, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:52:49,513 DEBUG [hconnection-0x237646e7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:52:49,527 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36432, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:52:49,536 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:52:49,537 INFO [Listener at localhost.localdomain/44071] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:52:49,543 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:52:49,543 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:52:49,544 INFO [Listener at localhost.localdomain/44071] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:52:49,552 DEBUG [Listener at localhost.localdomain/44071] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 21:52:49,555 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 21:52:49,563 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 21:52:49,563 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 21:52:49,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:52:49,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling 2023-05-24 21:52:49,571 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:52:49,573 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:52:49,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testSlowSyncLogRolling" procId is: 9 2023-05-24 21:52:49,577 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,579 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7 empty. 
2023-05-24 21:52:49,581 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,581 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testSlowSyncLogRolling regions 2023-05-24 21:52:49,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:52:49,608 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp/data/default/TestLogRolling-testSlowSyncLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 21:52:49,610 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 396496cac639d9c74d190baee4039fe7, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testSlowSyncLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/.tmp 2023-05-24 21:52:49,624 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:49,625 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1604): Closing 396496cac639d9c74d190baee4039fe7, disabling compactions & flushes 2023-05-24 21:52:49,625 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:52:49,625 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:52:49,625 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. after waiting 0 ms 2023-05-24 21:52:49,625 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:52:49,625 INFO [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 
2023-05-24 21:52:49,625 DEBUG [RegionOpenAndInit-TestLogRolling-testSlowSyncLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:52:49,630 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:52:49,632 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684965169631"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965169631"}]},"ts":"1684965169631"} 2023-05-24 21:52:49,635 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:52:49,636 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:52:49,637 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965169636"}]},"ts":"1684965169636"} 2023-05-24 21:52:49,639 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLING in hbase:meta 2023-05-24 21:52:49,642 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=396496cac639d9c74d190baee4039fe7, ASSIGN}] 2023-05-24 21:52:49,644 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=396496cac639d9c74d190baee4039fe7, ASSIGN 2023-05-24 21:52:49,645 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=396496cac639d9c74d190baee4039fe7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43575,1684965167152; forceNewPlan=false, retain=false 2023-05-24 21:52:49,797 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=396496cac639d9c74d190baee4039fe7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:49,798 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684965169797"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965169797"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965169797"}]},"ts":"1684965169797"} 2023-05-24 21:52:49,805 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 396496cac639d9c74d190baee4039fe7, server=jenkins-hbase20.apache.org,43575,1684965167152}] 2023-05-24 21:52:49,973 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:52:49,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 396496cac639d9c74d190baee4039fe7, NAME => 'TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:52:49,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testSlowSyncLogRolling 396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:52:49,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,976 INFO [StoreOpener-396496cac639d9c74d190baee4039fe7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,978 DEBUG [StoreOpener-396496cac639d9c74d190baee4039fe7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info 2023-05-24 21:52:49,978 DEBUG [StoreOpener-396496cac639d9c74d190baee4039fe7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info 2023-05-24 21:52:49,979 INFO [StoreOpener-396496cac639d9c74d190baee4039fe7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 396496cac639d9c74d190baee4039fe7 columnFamilyName info 2023-05-24 21:52:49,980 INFO [StoreOpener-396496cac639d9c74d190baee4039fe7-1] regionserver.HStore(310): Store=396496cac639d9c74d190baee4039fe7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:52:49,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 396496cac639d9c74d190baee4039fe7 2023-05-24 21:52:49,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:52:49,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 396496cac639d9c74d190baee4039fe7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=808766, jitterRate=0.028399616479873657}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:52:49,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:52:49,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7., pid=11, masterSystemTime=1684965169961 2023-05-24 21:52:49,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:52:49,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 
2023-05-24 21:52:49,996 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=396496cac639d9c74d190baee4039fe7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:52:49,996 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1684965169996"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965169996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965169996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965169996"}]},"ts":"1684965169996"} 2023-05-24 21:52:50,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 21:52:50,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 396496cac639d9c74d190baee4039fe7, server=jenkins-hbase20.apache.org,43575,1684965167152 in 194 msec 2023-05-24 21:52:50,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 21:52:50,006 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testSlowSyncLogRolling, region=396496cac639d9c74d190baee4039fe7, ASSIGN in 360 msec 2023-05-24 21:52:50,008 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:52:50,008 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testSlowSyncLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965170008"}]},"ts":"1684965170008"} 2023-05-24 21:52:50,010 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testSlowSyncLogRolling, state=ENABLED in hbase:meta 2023-05-24 21:52:50,013 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:52:50,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testSlowSyncLogRolling in 447 msec 2023-05-24 21:52:54,059 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-05-24 21:52:54,136 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 21:52:54,137 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 21:52:54,138 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testSlowSyncLogRolling' 2023-05-24 21:52:56,316 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 21:52:56,317 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-05-24 21:52:59,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33421] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:52:59,602 INFO [Listener at localhost.localdomain/44071] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testSlowSyncLogRolling, procId: 9 completed 2023-05-24 21:52:59,608 DEBUG [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testSlowSyncLogRolling 2023-05-24 21:52:59,609 DEBUG [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:53:11,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43575] regionserver.HRegion(9158): Flush requested on 396496cac639d9c74d190baee4039fe7 2023-05-24 21:53:11,665 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 396496cac639d9c74d190baee4039fe7 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:53:11,755 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/292fc14d2ed24b668341ea89d644d470 2023-05-24 21:53:11,796 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/292fc14d2ed24b668341ea89d644d470 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470 2023-05-24 21:53:11,812 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470, entries=7, sequenceid=11, filesize=12.1 K 2023-05-24 21:53:11,815 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 396496cac639d9c74d190baee4039fe7 in 150ms, sequenceid=11, compaction requested=false 2023-05-24 21:53:11,816 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:53:19,888 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 203 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:22,095 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:24,301 INFO [sync.4] 
wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:26,505 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:26,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43575] regionserver.HRegion(9158): Flush requested on 396496cac639d9c74d190baee4039fe7 2023-05-24 21:53:26,506 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 396496cac639d9c74d190baee4039fe7 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:53:26,708 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:26,729 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=21 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/d47170413b044b00bc0711d383d92d18 2023-05-24 21:53:26,740 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/d47170413b044b00bc0711d383d92d18 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18 2023-05-24 21:53:26,748 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18, entries=7, sequenceid=21, filesize=12.1 K 2023-05-24 21:53:26,950 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 202 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:26,952 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 396496cac639d9c74d190baee4039fe7 in 445ms, sequenceid=21, compaction requested=false 2023-05-24 21:53:26,952 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:53:26,952 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=24.2 K, sizeToCheck=16.0 K 2023-05-24 21:53:26,952 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:53:26,955 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split 
hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470 because midkey is the same as first or last row 2023-05-24 21:53:28,710 INFO [sync.3] wal.AbstractFSWAL(1141): Slow sync cost: 201 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:30,916 WARN [sync.4] wal.AbstractFSWAL(1302): Requesting log roll because we exceeded slow sync threshold; count=7, threshold=5, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:30,918 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43575%2C1684965167152:(num 1684965168312) roll requested 2023-05-24 21:53:30,918 INFO [sync.4] wal.AbstractFSWAL(1141): Slow sync cost: 205 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:31,139 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 200 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:31,141 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965168312 with entries=24, filesize=20.43 KB; new WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965210918 2023-05-24 21:53:31,142 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:31,142 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965168312 is not closed yet, will try archiving it next time 2023-05-24 21:53:40,934 INFO [Listener at localhost.localdomain/44071] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 21:53:45,938 INFO [sync.0] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:45,938 WARN [sync.0] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:45,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43575] regionserver.HRegion(9158): Flush requested on 396496cac639d9c74d190baee4039fe7 2023-05-24 21:53:45,938 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43575%2C1684965167152:(num 1684965210918) roll requested 2023-05-24 21:53:45,939 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 396496cac639d9c74d190baee4039fe7 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:53:47,940 INFO [Listener at localhost.localdomain/44071] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 21:53:50,941 INFO [sync.1] wal.AbstractFSWAL(1141): Slow sync cost: 5001 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:50,941 WARN [sync.1] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5001 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:50,959 INFO [sync.2] wal.AbstractFSWAL(1141): Slow sync cost: 5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:50,959 WARN [sync.2] wal.AbstractFSWAL(1147): Requesting log roll because we exceeded slow sync threshold; time=5000 ms, threshold=5000 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK], DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK]] 2023-05-24 21:53:50,961 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965210918 with entries=6, filesize=6.07 KB; new WAL /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965225939 2023-05-24 21:53:50,962 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40049,DS-90a6120e-114a-4b36-8179-08d946e5896b,DISK], DatanodeInfoWithStorage[127.0.0.1:45567,DS-256c25c1-aa8a-421f-b13a-5b700690ef21,DISK]] 2023-05-24 21:53:50,962 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152/jenkins-hbase20.apache.org%2C43575%2C1684965167152.1684965210918 is not closed yet, will try archiving it next time 2023-05-24 21:53:50,967 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=31 (bloomFilter=true), 
to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/5b60e6d92e86444ab3c843d95bae8349 2023-05-24 21:53:50,978 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/5b60e6d92e86444ab3c843d95bae8349 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349 2023-05-24 21:53:50,987 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349, entries=7, sequenceid=31, filesize=12.1 K 2023-05-24 21:53:50,990 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 396496cac639d9c74d190baee4039fe7 in 5052ms, sequenceid=31, compaction requested=true 2023-05-24 21:53:50,990 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:53:50,990 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=36.3 K, sizeToCheck=16.0 K 2023-05-24 21:53:50,990 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:53:50,990 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470 because midkey is the same as first or last row 2023-05-24 21:53:50,992 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:53:50,992 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:53:50,996 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 37197 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:53:50,998 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HStore(1912): 396496cac639d9c74d190baee4039fe7/info is initiating minor compaction (all files) 2023-05-24 21:53:50,998 INFO [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 396496cac639d9c74d190baee4039fe7/info in TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 
2023-05-24 21:53:50,998 INFO [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349] into tmpdir=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp, totalSize=36.3 K 2023-05-24 21:53:51,000 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] compactions.Compactor(207): Compacting 292fc14d2ed24b668341ea89d644d470, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684965179615 2023-05-24 21:53:51,001 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] compactions.Compactor(207): Compacting d47170413b044b00bc0711d383d92d18, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=21, earliestPutTs=1684965193667 2023-05-24 21:53:51,002 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] compactions.Compactor(207): Compacting 5b60e6d92e86444ab3c843d95bae8349, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=31, earliestPutTs=1684965208508 2023-05-24 21:53:51,034 INFO [RS:0;jenkins-hbase20:43575-shortCompactions-0] throttle.PressureAwareThroughputController(145): 396496cac639d9c74d190baee4039fe7#info#compaction#3 average throughput is 10.77 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:53:51,080 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/c3226d59fb794a45be6694dca4ba69a1 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/c3226d59fb794a45be6694dca4ba69a1 2023-05-24 21:53:51,100 INFO [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 396496cac639d9c74d190baee4039fe7/info of 396496cac639d9c74d190baee4039fe7 into c3226d59fb794a45be6694dca4ba69a1(size=27.0 K), total size for store is 27.0 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 21:53:51,100 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:53:51,100 INFO [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7., storeName=396496cac639d9c74d190baee4039fe7/info, priority=13, startTime=1684965230991; duration=0sec 2023-05-24 21:53:51,101 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=27.0 K, sizeToCheck=16.0 K 2023-05-24 21:53:51,101 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:53:51,101 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/c3226d59fb794a45be6694dca4ba69a1 because midkey is the same as first or last row 2023-05-24 21:53:51,101 DEBUG [RS:0;jenkins-hbase20:43575-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:54:03,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43575] regionserver.HRegion(9158): Flush requested on 396496cac639d9c74d190baee4039fe7 2023-05-24 21:54:03,080 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 396496cac639d9c74d190baee4039fe7 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:54:03,101 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=42 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/8db82bcb18294733ae1092e8230a5f71 2023-05-24 21:54:03,111 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/8db82bcb18294733ae1092e8230a5f71 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/8db82bcb18294733ae1092e8230a5f71 2023-05-24 21:54:03,120 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/8db82bcb18294733ae1092e8230a5f71, entries=7, sequenceid=42, filesize=12.1 K 2023-05-24 21:54:03,121 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=0 B/0 for 396496cac639d9c74d190baee4039fe7 in 42ms, sequenceid=42, compaction requested=false 2023-05-24 21:54:03,121 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:54:03,122 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split 
because info size=39.1 K, sizeToCheck=16.0 K 2023-05-24 21:54:03,122 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:54:03,122 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/c3226d59fb794a45be6694dca4ba69a1 because midkey is the same as first or last row 2023-05-24 21:54:11,094 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:54:11,096 INFO [Listener at localhost.localdomain/44071] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 21:54:11,096 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x23c67614 to 127.0.0.1:57676 2023-05-24 21:54:11,096 DEBUG [Listener at localhost.localdomain/44071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:54:11,097 DEBUG [Listener at localhost.localdomain/44071] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:54:11,097 DEBUG [Listener at localhost.localdomain/44071] util.JVMClusterUtil(257): Found active master hash=1658197941, stopped=false 2023-05-24 21:54:11,097 INFO [Listener at localhost.localdomain/44071] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:54:11,099 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:54:11,099 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:54:11,100 INFO [Listener at localhost.localdomain/44071] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:54:11,100 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:11,101 DEBUG [Listener at localhost.localdomain/44071] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x775a92dc to 127.0.0.1:57676 2023-05-24 21:54:11,101 DEBUG [Listener at localhost.localdomain/44071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:54:11,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:54:11,102 INFO [Listener at localhost.localdomain/44071] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,43575,1684965167152' ***** 2023-05-24 21:54:11,101 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:54:11,102 INFO [Listener at localhost.localdomain/44071] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:54:11,103 INFO 
[RS:0;jenkins-hbase20:43575] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:54:11,103 INFO [RS:0;jenkins-hbase20:43575] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:54:11,103 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:54:11,103 INFO [RS:0;jenkins-hbase20:43575] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:54:11,104 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(3303): Received CLOSE for 396496cac639d9c74d190baee4039fe7 2023-05-24 21:54:11,105 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(3303): Received CLOSE for e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:54:11,106 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:54:11,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 396496cac639d9c74d190baee4039fe7, disabling compactions & flushes 2023-05-24 21:54:11,106 DEBUG [RS:0;jenkins-hbase20:43575] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x023df191 to 127.0.0.1:57676 2023-05-24 21:54:11,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:54:11,106 DEBUG [RS:0;jenkins-hbase20:43575] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:54:11,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:54:11,106 INFO [RS:0;jenkins-hbase20:43575] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:54:11,106 INFO [RS:0;jenkins-hbase20:43575] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:54:11,106 INFO [RS:0;jenkins-hbase20:43575] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:54:11,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. after waiting 0 ms 2023-05-24 21:54:11,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 
2023-05-24 21:54:11,107 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:54:11,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 396496cac639d9c74d190baee4039fe7 1/1 column families, dataSize=3.15 KB heapSize=3.63 KB 2023-05-24 21:54:11,107 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 21:54:11,107 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 396496cac639d9c74d190baee4039fe7=TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7., e8236e18b14a0c7b530f55341f53a3ee=hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee.} 2023-05-24 21:54:11,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:54:11,108 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:54:11,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:54:11,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:54:11,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:54:11,108 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.87 KB heapSize=5.38 KB 2023-05-24 21:54:11,109 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1504): Waiting on 1588230740, 396496cac639d9c74d190baee4039fe7, e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:54:11,128 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.64 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/.tmp/info/9d9204572b824dcc9a4b2279333d8dac 2023-05-24 21:54:11,150 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=232 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/.tmp/table/223a949dad634b11816dc1d776bfb560 2023-05-24 21:54:11,158 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/.tmp/info/9d9204572b824dcc9a4b2279333d8dac as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/info/9d9204572b824dcc9a4b2279333d8dac 2023-05-24 21:54:11,166 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 21:54:11,166 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 21:54:11,167 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/info/9d9204572b824dcc9a4b2279333d8dac, entries=20, sequenceid=14, filesize=7.4 K 2023-05-24 21:54:11,169 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/.tmp/table/223a949dad634b11816dc1d776bfb560 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/table/223a949dad634b11816dc1d776bfb560 2023-05-24 21:54:11,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/table/223a949dad634b11816dc1d776bfb560, entries=4, sequenceid=14, filesize=4.8 K 2023-05-24 21:54:11,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.87 KB/2938, heapSize ~5.09 KB/5216, currentSize=0 B/0 for 1588230740 in 70ms, sequenceid=14, compaction requested=false 2023-05-24 21:54:11,185 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-24 21:54:11,186 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 21:54:11,188 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:54:11,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:54:11,188 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 21:54:11,310 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1504): Waiting on 396496cac639d9c74d190baee4039fe7, e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:54:11,511 DEBUG [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1504): Waiting on 396496cac639d9c74d190baee4039fe7, e8236e18b14a0c7b530f55341f53a3ee 2023-05-24 21:54:11,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.15 KB at sequenceid=48 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/1d03cd5142934ff7a384ff3875816723 2023-05-24 21:54:11,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/.tmp/info/1d03cd5142934ff7a384ff3875816723 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/1d03cd5142934ff7a384ff3875816723 2023-05-24 21:54:11,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/1d03cd5142934ff7a384ff3875816723, entries=3, sequenceid=48, filesize=7.9 K 2023-05-24 21:54:11,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.15 KB/3228, heapSize ~3.61 KB/3696, currentSize=0 B/0 for 396496cac639d9c74d190baee4039fe7 in 448ms, sequenceid=48, compaction requested=true 2023-05-24 21:54:11,558 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349] to archive 2023-05-24 21:54:11,559 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-24 21:54:11,566 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470 to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/archive/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/292fc14d2ed24b668341ea89d644d470 2023-05-24 21:54:11,568 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18 to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/archive/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/d47170413b044b00bc0711d383d92d18 2023-05-24 21:54:11,570 DEBUG [StoreCloser-TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349 to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/archive/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/info/5b60e6d92e86444ab3c843d95bae8349 2023-05-24 21:54:11,594 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/default/TestLogRolling-testSlowSyncLogRolling/396496cac639d9c74d190baee4039fe7/recovered.edits/51.seqid, newMaxSeqId=51, maxSeqId=1 2023-05-24 21:54:11,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:54:11,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 396496cac639d9c74d190baee4039fe7: 2023-05-24 21:54:11,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testSlowSyncLogRolling,,1684965169563.396496cac639d9c74d190baee4039fe7. 2023-05-24 21:54:11,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e8236e18b14a0c7b530f55341f53a3ee, disabling compactions & flushes 2023-05-24 21:54:11,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:54:11,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:54:11,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. after waiting 0 ms 2023-05-24 21:54:11,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 
2023-05-24 21:54:11,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing e8236e18b14a0c7b530f55341f53a3ee 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:54:11,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/.tmp/info/a3d9f986f517430cab0c8a4a744964fa 2023-05-24 21:54:11,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/.tmp/info/a3d9f986f517430cab0c8a4a744964fa as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/info/a3d9f986f517430cab0c8a4a744964fa 2023-05-24 21:54:11,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/info/a3d9f986f517430cab0c8a4a744964fa, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 21:54:11,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for e8236e18b14a0c7b530f55341f53a3ee in 33ms, sequenceid=6, compaction requested=false 2023-05-24 21:54:11,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/data/hbase/namespace/e8236e18b14a0c7b530f55341f53a3ee/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 21:54:11,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:54:11,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e8236e18b14a0c7b530f55341f53a3ee: 2023-05-24 21:54:11,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684965168738.e8236e18b14a0c7b530f55341f53a3ee. 2023-05-24 21:54:11,711 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43575,1684965167152; all regions closed. 
2023-05-24 21:54:11,712 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:54:11,721 DEBUG [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/oldWALs 2023-05-24 21:54:11,721 INFO [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43575%2C1684965167152.meta:.meta(num 1684965168511) 2023-05-24 21:54:11,722 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/WALs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:54:11,733 DEBUG [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/oldWALs 2023-05-24 21:54:11,733 INFO [RS:0;jenkins-hbase20:43575] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C43575%2C1684965167152:(num 1684965225939) 2023-05-24 21:54:11,733 DEBUG [RS:0;jenkins-hbase20:43575] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:54:11,733 INFO [RS:0;jenkins-hbase20:43575] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:54:11,733 INFO [RS:0;jenkins-hbase20:43575] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 21:54:11,734 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:54:11,734 INFO [RS:0;jenkins-hbase20:43575] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43575 2023-05-24 21:54:11,740 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43575,1684965167152 2023-05-24 21:54:11,740 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:54:11,740 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:54:11,741 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43575,1684965167152] 2023-05-24 21:54:11,741 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43575,1684965167152; numProcessing=1 2023-05-24 21:54:11,742 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43575,1684965167152 already deleted, retry=false 2023-05-24 21:54:11,742 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43575,1684965167152 expired; onlineServers=0 2023-05-24 21:54:11,742 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region 
server 'jenkins-hbase20.apache.org,33421,1684965166153' ***** 2023-05-24 21:54:11,742 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:54:11,743 DEBUG [M:0;jenkins-hbase20:33421] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@23eb1d8e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:54:11,743 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:54:11,743 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33421,1684965166153; all regions closed. 2023-05-24 21:54:11,743 DEBUG [M:0;jenkins-hbase20:33421] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:54:11,743 DEBUG [M:0;jenkins-hbase20:33421] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:54:11,744 DEBUG [M:0;jenkins-hbase20:33421] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:54:11,743 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 21:54:11,744 INFO [M:0;jenkins-hbase20:33421] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:54:11,744 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965168012] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965168012,5,FailOnTimeoutGroup] 2023-05-24 21:54:11,744 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965168013] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965168013,5,FailOnTimeoutGroup] 2023-05-24 21:54:11,744 INFO [M:0;jenkins-hbase20:33421] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-24 21:54:11,745 INFO [M:0;jenkins-hbase20:33421] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:54:11,745 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:54:11,745 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:11,745 DEBUG [M:0;jenkins-hbase20:33421] master.HMaster(1512): Stopping service threads 2023-05-24 21:54:11,745 INFO [M:0;jenkins-hbase20:33421] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:54:11,746 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:54:11,746 INFO [M:0;jenkins-hbase20:33421] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:54:11,746 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-24 21:54:11,747 DEBUG [M:0;jenkins-hbase20:33421] zookeeper.ZKUtil(398): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:54:11,747 WARN [M:0;jenkins-hbase20:33421] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:54:11,747 INFO [M:0;jenkins-hbase20:33421] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:54:11,747 INFO [M:0;jenkins-hbase20:33421] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:54:11,747 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:54:11,747 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:11,748 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:11,748 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:54:11,748 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 21:54:11,748 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.31 KB heapSize=46.76 KB 2023-05-24 21:54:11,766 INFO [M:0;jenkins-hbase20:33421] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.31 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b756aecd52774f4c83790365c3102819 2023-05-24 21:54:11,772 INFO [M:0;jenkins-hbase20:33421] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b756aecd52774f4c83790365c3102819 2023-05-24 21:54:11,773 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b756aecd52774f4c83790365c3102819 as hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b756aecd52774f4c83790365c3102819 2023-05-24 21:54:11,779 INFO [M:0;jenkins-hbase20:33421] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b756aecd52774f4c83790365c3102819 2023-05-24 21:54:11,779 INFO [M:0;jenkins-hbase20:33421] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b756aecd52774f4c83790365c3102819, entries=11, sequenceid=100, filesize=6.1 K 2023-05-24 21:54:11,780 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegion(2948): Finished flush of dataSize ~38.31 KB/39234, heapSize ~46.74 KB/47864, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=100, compaction requested=false 2023-05-24 21:54:11,781 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:11,782 DEBUG [M:0;jenkins-hbase20:33421] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:54:11,782 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/MasterData/WALs/jenkins-hbase20.apache.org,33421,1684965166153 2023-05-24 21:54:11,786 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:54:11,786 INFO [M:0;jenkins-hbase20:33421] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:54:11,787 INFO [M:0;jenkins-hbase20:33421] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33421 2023-05-24 21:54:11,788 DEBUG [M:0;jenkins-hbase20:33421] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33421,1684965166153 already deleted, retry=false 2023-05-24 21:54:11,841 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:54:11,841 INFO [RS:0;jenkins-hbase20:43575] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43575,1684965167152; zookeeper connection closed. 
2023-05-24 21:54:11,841 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): regionserver:43575-0x1017f76217a0001, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:54:11,842 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@31246845] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@31246845 2023-05-24 21:54:11,842 INFO [Listener at localhost.localdomain/44071] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 21:54:11,941 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:54:11,941 INFO [M:0;jenkins-hbase20:33421] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33421,1684965166153; zookeeper connection closed. 2023-05-24 21:54:11,942 DEBUG [Listener at localhost.localdomain/44071-EventThread] zookeeper.ZKWatcher(600): master:33421-0x1017f76217a0000, quorum=127.0.0.1:57676, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:54:11,944 WARN [Listener at localhost.localdomain/44071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:54:11,950 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:12,060 WARN [BP-420647186-148.251.75.209-1684965163379 heartbeating to localhost.localdomain/127.0.0.1:34243] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:12,060 WARN [BP-420647186-148.251.75.209-1684965163379 heartbeating to localhost.localdomain/127.0.0.1:34243] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-420647186-148.251.75.209-1684965163379 (Datanode Uuid 1e709f5d-dc9a-4bce-a323-e3856935fee2) service to localhost.localdomain/127.0.0.1:34243 2023-05-24 21:54:12,063 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/dfs/data/data3/current/BP-420647186-148.251.75.209-1684965163379] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:12,063 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/dfs/data/data4/current/BP-420647186-148.251.75.209-1684965163379] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:12,064 WARN [Listener at localhost.localdomain/44071] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:54:12,066 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:12,171 WARN [BP-420647186-148.251.75.209-1684965163379 heartbeating to localhost.localdomain/127.0.0.1:34243] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:12,171 WARN [BP-420647186-148.251.75.209-1684965163379 
heartbeating to localhost.localdomain/127.0.0.1:34243] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-420647186-148.251.75.209-1684965163379 (Datanode Uuid 6aeec9eb-a448-49b2-bd46-36a70e7e1eb8) service to localhost.localdomain/127.0.0.1:34243 2023-05-24 21:54:12,171 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:54:12,172 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/dfs/data/data1/current/BP-420647186-148.251.75.209-1684965163379] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:12,173 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/cluster_d761d889-339f-37d8-3c89-21d29c2ec590/dfs/data/data2/current/BP-420647186-148.251.75.209-1684965163379] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:12,209 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:54:12,323 INFO [Listener at localhost.localdomain/44071] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:54:12,361 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:54:12,371 INFO [Listener at localhost.localdomain/44071] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testSlowSyncLogRolling Thread=50 (was 10) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:34243 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/44071 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@5c51fcf2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:34243 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:327) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:34243 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:34243 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:34243 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) - Thread LEAK? -, OpenFileDescriptor=432 (was 264) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=138 (was 380), ProcessCount=168 (was 169), AvailableMemoryMB=9952 (was 11068) 2023-05-24 21:54:12,378 INFO [Listener at localhost.localdomain/44071] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=51, OpenFileDescriptor=432, MaxFileDescriptor=60000, SystemLoadAverage=138, ProcessCount=168, AvailableMemoryMB=9952 2023-05-24 21:54:12,378 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:54:12,378 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/hadoop.log.dir so I do NOT create it in target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/60cbf3da-eef1-44eb-706e-b10f6d030ed1/hadoop.tmp.dir so I do NOT create it in target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758, deleteOnExit=true 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/test.cache.data in system properties and HBase conf 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:54:12,379 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:54:12,380 DEBUG [Listener at localhost.localdomain/44071] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:54:12,380 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/nfs.dump.dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:54:12,381 INFO [Listener at localhost.localdomain/44071] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:54:12,383 WARN [Listener at localhost.localdomain/44071] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-24 21:54:12,384 WARN [Listener at localhost.localdomain/44071] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:54:12,384 WARN [Listener at localhost.localdomain/44071] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:54:12,408 WARN [Listener at localhost.localdomain/44071] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:12,410 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:12,415 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_localdomain_38459_hdfs____j393xp/webapp 2023-05-24 21:54:12,489 INFO [Listener at localhost.localdomain/44071] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38459 2023-05-24 21:54:12,490 WARN [Listener at localhost.localdomain/44071] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:54:12,492 WARN [Listener at localhost.localdomain/44071] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:54:12,492 WARN [Listener at localhost.localdomain/44071] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:54:12,519 WARN [Listener at localhost.localdomain/43361] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:12,529 WARN [Listener at localhost.localdomain/43361] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:12,533 WARN [Listener at localhost.localdomain/43361] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:12,535 INFO [Listener at localhost.localdomain/43361] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:12,540 INFO [Listener at localhost.localdomain/43361] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_45741_datanode____.ywrfeu/webapp 2023-05-24 21:54:12,611 INFO [Listener at localhost.localdomain/43361] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45741 2023-05-24 21:54:12,618 WARN [Listener at localhost.localdomain/35267] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:12,631 WARN [Listener at localhost.localdomain/35267] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:12,633 WARN [Listener at localhost.localdomain/35267] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:12,634 INFO [Listener at localhost.localdomain/35267] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:12,638 INFO [Listener at localhost.localdomain/35267] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_40463_datanode____.mvw0pw/webapp 2023-05-24 21:54:12,687 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xba10840f7452f33a: Processing first storage report for DS-68111bb2-9654-4ebb-81c4-cb070842208b from datanode 3bcab2c3-fdd7-4742-b150-f7b0f82901e7 2023-05-24 21:54:12,688 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xba10840f7452f33a: from storage DS-68111bb2-9654-4ebb-81c4-cb070842208b node DatanodeRegistration(127.0.0.1:34865, datanodeUuid=3bcab2c3-fdd7-4742-b150-f7b0f82901e7, infoPort=33769, infoSecurePort=0, ipcPort=35267, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:12,688 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xba10840f7452f33a: Processing first storage report for DS-033b7338-ec79-419f-bd8a-9917f7dd06a7 from datanode 3bcab2c3-fdd7-4742-b150-f7b0f82901e7 2023-05-24 21:54:12,688 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xba10840f7452f33a: from storage DS-033b7338-ec79-419f-bd8a-9917f7dd06a7 node DatanodeRegistration(127.0.0.1:34865, datanodeUuid=3bcab2c3-fdd7-4742-b150-f7b0f82901e7, infoPort=33769, infoSecurePort=0, ipcPort=35267, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:12,722 INFO [Listener at localhost.localdomain/35267] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40463 2023-05-24 21:54:12,730 WARN [Listener at localhost.localdomain/42335] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:12,836 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59fb59be76cdf5a2: Processing first storage report for DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34 from datanode 202e8ffd-9195-48d0-8926-e4e2dfd44817 2023-05-24 21:54:12,836 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59fb59be76cdf5a2: from storage DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34 node DatanodeRegistration(127.0.0.1:41849, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=39215, infoSecurePort=0, ipcPort=42335, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:12,836 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59fb59be76cdf5a2: Processing first storage report for DS-51739db5-cf27-4f37-ac71-9c445c4b90c1 from datanode 202e8ffd-9195-48d0-8926-e4e2dfd44817 2023-05-24 21:54:12,836 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59fb59be76cdf5a2: from storage DS-51739db5-cf27-4f37-ac71-9c445c4b90c1 node DatanodeRegistration(127.0.0.1:41849, 
datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=39215, infoSecurePort=0, ipcPort=42335, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:12,842 DEBUG [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843 2023-05-24 21:54:12,845 INFO [Listener at localhost.localdomain/42335] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/zookeeper_0, clientPort=58580, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:54:12,846 INFO [Listener at localhost.localdomain/42335] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58580 2023-05-24 21:54:12,847 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,848 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,867 INFO [Listener at localhost.localdomain/42335] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca with version=8 2023-05-24 21:54:12,867 INFO [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:54:12,869 INFO [Listener at localhost.localdomain/42335] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:54:12,869 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,869 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,869 INFO [Listener at localhost.localdomain/42335] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:54:12,870 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,870 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:54:12,870 INFO [Listener at localhost.localdomain/42335] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:54:12,871 INFO [Listener at localhost.localdomain/42335] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38657 2023-05-24 21:54:12,871 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,872 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,873 INFO [Listener at localhost.localdomain/42335] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38657 connecting to ZooKeeper ensemble=127.0.0.1:58580 2023-05-24 21:54:12,878 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:386570x0, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:54:12,879 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38657-0x1017f7777180000 connected 2023-05-24 21:54:12,894 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:54:12,894 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:54:12,895 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:54:12,895 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-24 21:54:12,895 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38657 2023-05-24 21:54:12,896 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38657 2023-05-24 21:54:12,896 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-24 21:54:12,896 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38657 2023-05-24 21:54:12,896 INFO [Listener at localhost.localdomain/42335] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca, hbase.cluster.distributed=false 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:54:12,909 INFO [Listener at localhost.localdomain/42335] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:54:12,911 INFO [Listener at localhost.localdomain/42335] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33137 2023-05-24 21:54:12,911 INFO [Listener at localhost.localdomain/42335] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:54:12,912 DEBUG [Listener at localhost.localdomain/42335] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:54:12,912 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,913 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,914 INFO [Listener at localhost.localdomain/42335] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33137 connecting to ZooKeeper ensemble=127.0.0.1:58580 2023-05-24 21:54:12,926 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:331370x0, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:54:12,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33137-0x1017f7777180001 connected 2023-05-24 21:54:12,927 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:54:12,928 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): 
regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:54:12,929 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:54:12,931 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33137 2023-05-24 21:54:12,931 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33137 2023-05-24 21:54:12,931 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33137 2023-05-24 21:54:12,932 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33137 2023-05-24 21:54:12,932 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33137 2023-05-24 21:54:12,933 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:12,935 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:54:12,935 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:12,936 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:54:12,937 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:54:12,937 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:12,937 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:54:12,938 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,38657,1684965252869 from backup master directory 2023-05-24 21:54:12,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:54:12,940 DEBUG [Listener at 
localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:12,940 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:54:12,940 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 21:54:12,940 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:12,956 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/hbase.id with ID: 34738157-4218-4b1e-8dc0-830826848ed4 2023-05-24 21:54:12,970 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:12,973 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:12,987 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x57f1cb84 to 127.0.0.1:58580 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:54:12,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61fd25aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:54:12,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:54:12,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:54:12,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:54:12,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store-tmp 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:54:13,003 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:13,003 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:54:13,003 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:54:13,004 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:13,007 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38657%2C1684965252869, suffix=, logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869, archiveDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/oldWALs, maxLogs=10 2023-05-24 21:54:13,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965253007 2023-05-24 21:54:13,014 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] 2023-05-24 21:54:13,014 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:54:13,014 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,016 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,018 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 21:54:13,019 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:54:13,020 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,021 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,021 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,025 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:54:13,028 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:54:13,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=777190, jitterRate=-0.01175256073474884}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:54:13,029 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:54:13,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:54:13,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:54:13,032 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:54:13,032 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 21:54:13,033 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-05-24 21:54:13,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 21:54:13,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:54:13,036 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:54:13,038 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:54:13,050 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:54:13,050 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-24 21:54:13,051 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:54:13,051 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:54:13,051 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:54:13,053 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:13,054 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:54:13,054 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:54:13,055 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:54:13,056 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:54:13,056 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:54:13,056 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:13,056 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,38657,1684965252869, sessionid=0x1017f7777180000, setting cluster-up flag (Was=false) 2023-05-24 21:54:13,059 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:13,061 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:54:13,062 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:13,064 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:13,067 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:54:13,068 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:13,068 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.hbase-snapshot/.tmp 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:54:13,071 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965283077 2023-05-24 21:54:13,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:54:13,077 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:54:13,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:54:13,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:54:13,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:54:13,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:54:13,078 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,078 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:54:13,078 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:54:13,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:54:13,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:54:13,080 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:54:13,081 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:54:13,082 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:54:13,082 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:54:13,086 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965253082,5,FailOnTimeoutGroup] 2023-05-24 21:54:13,087 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965253087,5,FailOnTimeoutGroup] 2023-05-24 21:54:13,087 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-05-24 21:54:13,087 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:54:13,087 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,087 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,101 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:54:13,102 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:54:13,102 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca 2023-05-24 21:54:13,113 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:54:13,116 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/info 2023-05-24 21:54:13,116 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:54:13,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,117 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:54:13,119 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:54:13,119 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:54:13,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:54:13,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/table 2023-05-24 21:54:13,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:54:13,123 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,124 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740 2023-05-24 21:54:13,125 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740 2023-05-24 21:54:13,127 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:54:13,128 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:54:13,130 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:54:13,131 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=825513, jitterRate=0.0496944934129715}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:54:13,131 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:54:13,131 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:54:13,131 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:54:13,132 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:54:13,132 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:54:13,133 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:54:13,134 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(951): ClusterId : 34738157-4218-4b1e-8dc0-830826848ed4 2023-05-24 21:54:13,135 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:54:13,135 DEBUG [RS:0;jenkins-hbase20:33137] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:54:13,136 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:54:13,137 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:54:13,137 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:54:13,139 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:54:13,140 DEBUG [RS:0;jenkins-hbase20:33137] zookeeper.ReadOnlyZKClient(139): Connect 0x4481509b to 127.0.0.1:58580 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:54:13,143 DEBUG [RS:0;jenkins-hbase20:33137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a46b67b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:54:13,143 DEBUG [RS:0;jenkins-hbase20:33137] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e793beb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:54:13,150 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:33137 2023-05-24 21:54:13,151 INFO [RS:0;jenkins-hbase20:33137] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:54:13,151 INFO [RS:0;jenkins-hbase20:33137] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:54:13,151 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 21:54:13,151 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,38657,1684965252869 with isa=jenkins-hbase20.apache.org/148.251.75.209:33137, startcode=1684965252908 2023-05-24 21:54:13,152 DEBUG [RS:0;jenkins-hbase20:33137] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:54:13,156 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:33523, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:54:13,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,158 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca 2023-05-24 21:54:13,158 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43361 2023-05-24 21:54:13,158 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:54:13,160 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:54:13,161 DEBUG [RS:0;jenkins-hbase20:33137] zookeeper.ZKUtil(162): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,161 WARN [RS:0;jenkins-hbase20:33137] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:54:13,161 INFO [RS:0;jenkins-hbase20:33137] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:54:13,161 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,161 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,33137,1684965252908] 2023-05-24 21:54:13,165 DEBUG [RS:0;jenkins-hbase20:33137] zookeeper.ZKUtil(162): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,167 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:54:13,167 INFO [RS:0;jenkins-hbase20:33137] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:54:13,170 INFO [RS:0;jenkins-hbase20:33137] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:54:13,173 INFO [RS:0;jenkins-hbase20:33137] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:54:13,173 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,174 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:54:13,175 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,175 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:54:13,176 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,176 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,176 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,176 DEBUG [RS:0;jenkins-hbase20:33137] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:13,176 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,177 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,177 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,186 INFO [RS:0;jenkins-hbase20:33137] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:54:13,187 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33137,1684965252908-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:54:13,206 INFO [RS:0;jenkins-hbase20:33137] regionserver.Replication(203): jenkins-hbase20.apache.org,33137,1684965252908 started 2023-05-24 21:54:13,206 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,33137,1684965252908, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:33137, sessionid=0x1017f7777180001 2023-05-24 21:54:13,206 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:54:13,206 DEBUG [RS:0;jenkins-hbase20:33137] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,206 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33137,1684965252908' 2023-05-24 21:54:13,206 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:54:13,207 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33137,1684965252908' 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:54:13,208 DEBUG [RS:0;jenkins-hbase20:33137] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:54:13,209 DEBUG [RS:0;jenkins-hbase20:33137] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:54:13,209 INFO [RS:0;jenkins-hbase20:33137] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:54:13,209 INFO [RS:0;jenkins-hbase20:33137] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 21:54:13,286 DEBUG [jenkins-hbase20:38657] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:54:13,287 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33137,1684965252908, state=OPENING 2023-05-24 21:54:13,289 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:54:13,289 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:13,290 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33137,1684965252908}] 2023-05-24 21:54:13,290 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:54:13,311 INFO [RS:0;jenkins-hbase20:33137] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33137%2C1684965252908, suffix=, logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908, archiveDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/oldWALs, maxLogs=32 2023-05-24 21:54:13,323 INFO [RS:0;jenkins-hbase20:33137] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965253313 2023-05-24 21:54:13,323 DEBUG [RS:0;jenkins-hbase20:33137] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] 2023-05-24 21:54:13,445 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,446 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:54:13,452 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:54:13,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:54:13,461 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:54:13,464 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908, archiveDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/oldWALs, maxLogs=32 2023-05-24 21:54:13,481 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965253467.meta 2023-05-24 21:54:13,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] 2023-05-24 21:54:13,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:54:13,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:54:13,482 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:54:13,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:54:13,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:54:13,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:54:13,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:54:13,485 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:54:13,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/info 2023-05-24 21:54:13,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/info 2023-05-24 21:54:13,488 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:54:13,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:54:13,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:54:13,490 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:54:13,491 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:54:13,491 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,492 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:54:13,493 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/table 2023-05-24 21:54:13,493 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740/table 2023-05-24 21:54:13,494 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:54:13,495 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,497 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740 2023-05-24 21:54:13,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/meta/1588230740 2023-05-24 21:54:13,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:54:13,504 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:54:13,505 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=874656, jitterRate=0.1121835708618164}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:54:13,505 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:54:13,507 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965253445 2023-05-24 21:54:13,511 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:54:13,512 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:54:13,513 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,33137,1684965252908, state=OPEN 2023-05-24 21:54:13,515 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:54:13,515 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:54:13,519 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:54:13,519 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,33137,1684965252908 in 225 msec 2023-05-24 
21:54:13,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:54:13,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 386 msec 2023-05-24 21:54:13,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 455 msec 2023-05-24 21:54:13,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965253526, completionTime=-1 2023-05-24 21:54:13,526 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:54:13,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:54:13,530 DEBUG [hconnection-0x53f38209-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:54:13,532 INFO [RS-EventLoopGroup-6-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:54:13,534 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:54:13,534 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965313534 2023-05-24 21:54:13,534 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965373534 2023-05-24 21:54:13,534 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38657,1684965252869-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38657,1684965252869-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38657,1684965252869-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:38657, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-24 21:54:13,540 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:54:13,541 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:54:13,542 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:54:13,543 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:54:13,544 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:54:13,546 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,547 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7 empty. 2023-05-24 21:54:13,548 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,548 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:54:13,564 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:54:13,566 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fe99d63cb92056da15a4255b774359f7, NAME => 'hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fe99d63cb92056da15a4255b774359f7, disabling compactions & flushes 2023-05-24 21:54:13,576 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. after waiting 0 ms 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,576 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,576 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fe99d63cb92056da15a4255b774359f7: 2023-05-24 21:54:13,581 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:54:13,583 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965253583"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965253583"}]},"ts":"1684965253583"} 2023-05-24 21:54:13,586 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:54:13,587 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:54:13,588 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965253587"}]},"ts":"1684965253587"} 2023-05-24 21:54:13,589 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:54:13,593 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fe99d63cb92056da15a4255b774359f7, ASSIGN}] 2023-05-24 21:54:13,596 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fe99d63cb92056da15a4255b774359f7, ASSIGN 2023-05-24 21:54:13,597 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fe99d63cb92056da15a4255b774359f7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33137,1684965252908; forceNewPlan=false, retain=false 2023-05-24 21:54:13,748 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fe99d63cb92056da15a4255b774359f7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,749 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965253748"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965253748"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965253748"}]},"ts":"1684965253748"} 2023-05-24 21:54:13,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fe99d63cb92056da15a4255b774359f7, server=jenkins-hbase20.apache.org,33137,1684965252908}] 2023-05-24 21:54:13,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,918 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fe99d63cb92056da15a4255b774359f7, NAME => 'hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:54:13,918 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:13,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,919 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,922 INFO [StoreOpener-fe99d63cb92056da15a4255b774359f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,926 DEBUG [StoreOpener-fe99d63cb92056da15a4255b774359f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/info 2023-05-24 21:54:13,926 DEBUG [StoreOpener-fe99d63cb92056da15a4255b774359f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/info 2023-05-24 21:54:13,926 INFO [StoreOpener-fe99d63cb92056da15a4255b774359f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fe99d63cb92056da15a4255b774359f7 columnFamilyName info 2023-05-24 21:54:13,927 INFO [StoreOpener-fe99d63cb92056da15a4255b774359f7-1] regionserver.HStore(310): Store=fe99d63cb92056da15a4255b774359f7/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:13,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,929 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,933 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for fe99d63cb92056da15a4255b774359f7 2023-05-24 21:54:13,935 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:54:13,936 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened fe99d63cb92056da15a4255b774359f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=729688, jitterRate=-0.07215449213981628}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:54:13,936 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for fe99d63cb92056da15a4255b774359f7: 2023-05-24 21:54:13,938 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7., pid=6, masterSystemTime=1684965253908 2023-05-24 21:54:13,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:54:13,940 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 
2023-05-24 21:54:13,941 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fe99d63cb92056da15a4255b774359f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:13,941 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965253941"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965253941"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965253941"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965253941"}]},"ts":"1684965253941"} 2023-05-24 21:54:13,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:54:13,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fe99d63cb92056da15a4255b774359f7, server=jenkins-hbase20.apache.org,33137,1684965252908 in 189 msec 2023-05-24 21:54:13,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:54:13,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fe99d63cb92056da15a4255b774359f7, ASSIGN in 353 msec 2023-05-24 21:54:13,950 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:54:13,950 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965253950"}]},"ts":"1684965253950"} 2023-05-24 21:54:13,953 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:54:13,955 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:54:13,956 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 415 msec 2023-05-24 21:54:14,043 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:54:14,044 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:54:14,044 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:14,049 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:54:14,061 DEBUG [Listener at localhost.localdomain/42335-EventThread] 
zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:54:14,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-05-24 21:54:14,073 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:54:14,084 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:54:14,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-05-24 21:54:14,101 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:54:14,103 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.163sec 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38657,1684965252869-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:54:14,103 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38657,1684965252869-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 21:54:14,106 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:54:14,135 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ReadOnlyZKClient(139): Connect 0x592c7b1a to 127.0.0.1:58580 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:54:14,143 DEBUG [Listener at localhost.localdomain/42335] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e908e6b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:54:14,147 DEBUG [hconnection-0x6a7beae4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:54:14,151 INFO [RS-EventLoopGroup-6-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:54:14,155 INFO [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:54:14,156 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:14,160 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:54:14,160 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:54:14,161 INFO [Listener at localhost.localdomain/42335] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 
21:54:14,170 INFO [Listener at localhost.localdomain/42335] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:54:14,172 INFO [Listener at localhost.localdomain/42335] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33985 2023-05-24 21:54:14,172 INFO [Listener at localhost.localdomain/42335] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:54:14,173 DEBUG [Listener at localhost.localdomain/42335] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:54:14,173 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:14,174 INFO [Listener at localhost.localdomain/42335] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:54:14,175 INFO [Listener at localhost.localdomain/42335] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33985 connecting to ZooKeeper ensemble=127.0.0.1:58580 2023-05-24 21:54:14,177 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:339850x0, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:54:14,178 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(162): regionserver:339850x0, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:54:14,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33985-0x1017f7777180005 connected 2023-05-24 21:54:14,180 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(162): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-05-24 21:54:14,180 DEBUG [Listener at localhost.localdomain/42335] zookeeper.ZKUtil(164): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:54:14,181 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33985 2023-05-24 21:54:14,181 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33985 2023-05-24 21:54:14,181 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33985 2023-05-24 21:54:14,181 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33985 2023-05-24 21:54:14,182 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33985 2023-05-24 21:54:14,183 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(951): ClusterId : 34738157-4218-4b1e-8dc0-830826848ed4 2023-05-24 21:54:14,184 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(43): Procedure 
flush-table-proc initializing 2023-05-24 21:54:14,190 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:54:14,190 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:54:14,191 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:54:14,192 DEBUG [RS:1;jenkins-hbase20:33985] zookeeper.ReadOnlyZKClient(139): Connect 0x4077225a to 127.0.0.1:58580 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:54:14,196 DEBUG [RS:1;jenkins-hbase20:33985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4eb0d424, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:54:14,196 DEBUG [RS:1;jenkins-hbase20:33985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a8f492a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:54:14,203 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:33985 2023-05-24 21:54:14,204 INFO [RS:1;jenkins-hbase20:33985] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:54:14,204 INFO [RS:1;jenkins-hbase20:33985] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:54:14,204 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 21:54:14,205 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,38657,1684965252869 with isa=jenkins-hbase20.apache.org/148.251.75.209:33985, startcode=1684965254169 2023-05-24 21:54:14,205 DEBUG [RS:1;jenkins-hbase20:33985] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:54:14,208 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39751, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:54:14,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,209 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca 2023-05-24 21:54:14,209 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43361 2023-05-24 21:54:14,209 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:54:14,210 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:54:14,210 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:54:14,210 DEBUG [RS:1;jenkins-hbase20:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,210 WARN [RS:1;jenkins-hbase20:33985] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:54:14,210 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,33985,1684965254169] 2023-05-24 21:54:14,210 INFO [RS:1;jenkins-hbase20:33985] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:54:14,210 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:14,210 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,211 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,214 DEBUG [RS:1;jenkins-hbase20:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:14,215 DEBUG [RS:1;jenkins-hbase20:33985] zookeeper.ZKUtil(162): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,215 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:54:14,216 INFO [RS:1;jenkins-hbase20:33985] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:54:14,218 INFO [RS:1;jenkins-hbase20:33985] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:54:14,219 INFO [RS:1;jenkins-hbase20:33985] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:54:14,219 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:14,220 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:54:14,221 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
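The "Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider" entry above refers to the classic filesystem-based WAL implementation. As a rough, illustrative sketch only (class name and setup are mine, not taken from this test), an HBase 2.x configuration can pin that provider like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    // Illustrative only: select the FSHLog-backed WAL provider ("filesystem"),
    // i.e. the org.apache.hadoop.hbase.wal.FSHLogProvider named in the log above.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "filesystem");
    System.out.println("hbase.wal.provider = " + conf.get("hbase.wal.provider"));
  }
}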
2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,221 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:54:14,222 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,222 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,222 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,222 DEBUG [RS:1;jenkins-hbase20:33985] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:54:14,223 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:14,223 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:14,223 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:54:14,232 INFO [RS:1;jenkins-hbase20:33985] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:54:14,232 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33985,1684965254169-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:54:14,241 INFO [RS:1;jenkins-hbase20:33985] regionserver.Replication(203): jenkins-hbase20.apache.org,33985,1684965254169 started 2023-05-24 21:54:14,241 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,33985,1684965254169, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:33985, sessionid=0x1017f7777180005 2023-05-24 21:54:14,241 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:54:14,241 INFO [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(3254): Started new server=Thread[RS:1;jenkins-hbase20:33985,5,FailOnTimeoutGroup] 2023-05-24 21:54:14,241 DEBUG [RS:1;jenkins-hbase20:33985] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,242 INFO [Listener at localhost.localdomain/42335] wal.TestLogRolling(323): Replication=2 2023-05-24 21:54:14,242 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33985,1684965254169' 2023-05-24 21:54:14,242 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:54:14,243 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:54:14,244 DEBUG [Listener at localhost.localdomain/42335] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 21:54:14,244 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:54:14,244 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:54:14,244 DEBUG [RS:1;jenkins-hbase20:33985] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:54:14,245 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33985,1684965254169' 2023-05-24 21:54:14,245 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:54:14,245 DEBUG [RS:1;jenkins-hbase20:33985] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:54:14,245 DEBUG [RS:1;jenkins-hbase20:33985] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:54:14,245 INFO [RS:1;jenkins-hbase20:33985] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:54:14,246 INFO [RS:1;jenkins-hbase20:33985] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
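The "Started new server=Thread[RS:1;jenkins-hbase20:33985,...]" entry above is HBaseTestingUtility adding a second region server to the already running mini-cluster. A minimal sketch of that pattern, assuming the HBase 2.x test utilities on the classpath (class name is illustrative; this is not the test's actual code):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class SecondRegionServerSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // One master, one region server, two datanodes, as in the startup options logged earlier.
    util.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1).numRegionServers(1).numDataNodes(2).build());
    // Bring up an additional region server; this is what produces the RS:1 startup entries.
    util.getMiniHBaseCluster().startRegionServer();
    util.shutdownMiniCluster();
  }
}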
2023-05-24 21:54:14,247 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46956, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 21:54:14,249 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 21:54:14,249 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 2023-05-24 21:54:14,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:54:14,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath 2023-05-24 21:54:14,253 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:54:14,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnDatanodeDeath" procId is: 9 2023-05-24 21:54:14,254 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:54:14,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:54:14,256 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,257 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b empty. 
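The create request above carries the full table descriptor (family 'info', BLOOMFILTER => 'ROW', VERSIONS => '1'), and the two TableDescriptorChecker warnings show deliberately tiny thresholds (MAX_FILESIZE 786432, MEMSTORE_FLUSHSIZE 8192) in effect for this table. A hedged sketch of creating such a table through the HBase 2.x Admin API (illustrative only; in the test the small thresholds presumably come from the cluster configuration rather than the descriptor):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("TestLogRolling-testLogRollOnDatanodeDeath"))
          // Set here on the descriptor for illustration; matches the values the master warns about.
          .setMaxFileSize(786432)
          .setMemStoreFlushSize(8192)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.ROW)
              .build())
          .build());
    }
  }
}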
2023-05-24 21:54:14,257 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,257 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnDatanodeDeath regions 2023-05-24 21:54:14,273 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp/data/default/TestLogRolling-testLogRollOnDatanodeDeath/.tabledesc/.tableinfo.0000000001 2023-05-24 21:54:14,274 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(7675): creating {ENCODED => af16655b8851ff0a73ed3cff71bd2c3b, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnDatanodeDeath', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/.tmp 2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1604): Closing af16655b8851ff0a73ed3cff71bd2c3b, disabling compactions & flushes 2023-05-24 21:54:14,283 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. after waiting 0 ms 2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:54:14,283 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 
2023-05-24 21:54:14,283 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnDatanodeDeath-pool-0] regionserver.HRegion(1558): Region close journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:54:14,286 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:54:14,288 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684965254288"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965254288"}]},"ts":"1684965254288"} 2023-05-24 21:54:14,290 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:54:14,291 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:54:14,292 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965254291"}]},"ts":"1684965254291"} 2023-05-24 21:54:14,293 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLING in hbase:meta 2023-05-24 21:54:14,300 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-05-24 21:54:14,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-05-24 21:54:14,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-05-24 21:54:14,302 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-05-24 21:54:14,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=af16655b8851ff0a73ed3cff71bd2c3b, ASSIGN}] 2023-05-24 21:54:14,306 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=af16655b8851ff0a73ed3cff71bd2c3b, ASSIGN 2023-05-24 21:54:14,307 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=af16655b8851ff0a73ed3cff71bd2c3b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33137,1684965252908; forceNewPlan=false, retain=false 2023-05-24 21:54:14,350 INFO [RS:1;jenkins-hbase20:33985] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33985%2C1684965254169, suffix=, logDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33985,1684965254169, 
archiveDir=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/oldWALs, maxLogs=32 2023-05-24 21:54:14,367 INFO [RS:1;jenkins-hbase20:33985] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33985,1684965254169/jenkins-hbase20.apache.org%2C33985%2C1684965254169.1684965254352 2023-05-24 21:54:14,367 DEBUG [RS:1;jenkins-hbase20:33985] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK], DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]] 2023-05-24 21:54:14,460 INFO [jenkins-hbase20:38657] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-05-24 21:54:14,461 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=af16655b8851ff0a73ed3cff71bd2c3b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:14,461 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684965254461"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965254461"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965254461"}]},"ts":"1684965254461"} 2023-05-24 21:54:14,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure af16655b8851ff0a73ed3cff71bd2c3b, server=jenkins-hbase20.apache.org,33137,1684965252908}] 2023-05-24 21:54:14,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 
2023-05-24 21:54:14,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => af16655b8851ff0a73ed3cff71bd2c3b, NAME => 'TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:54:14,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnDatanodeDeath af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:54:14,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,626 INFO [StoreOpener-af16655b8851ff0a73ed3cff71bd2c3b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,628 DEBUG [StoreOpener-af16655b8851ff0a73ed3cff71bd2c3b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info 2023-05-24 21:54:14,628 DEBUG [StoreOpener-af16655b8851ff0a73ed3cff71bd2c3b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info 2023-05-24 21:54:14,628 INFO [StoreOpener-af16655b8851ff0a73ed3cff71bd2c3b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region af16655b8851ff0a73ed3cff71bd2c3b columnFamilyName info 2023-05-24 21:54:14,629 INFO [StoreOpener-af16655b8851ff0a73ed3cff71bd2c3b-1] regionserver.HStore(310): Store=af16655b8851ff0a73ed3cff71bd2c3b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:54:14,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:14,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:54:14,639 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened af16655b8851ff0a73ed3cff71bd2c3b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=742054, jitterRate=-0.05642960965633392}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:54:14,639 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:54:14,640 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b., pid=11, masterSystemTime=1684965254618 2023-05-24 21:54:14,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:54:14,642 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 
2023-05-24 21:54:14,643 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=af16655b8851ff0a73ed3cff71bd2c3b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:54:14,643 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.","families":{"info":[{"qualifier":"regioninfo","vlen":75,"tag":[],"timestamp":"1684965254643"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965254643"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965254643"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965254643"}]},"ts":"1684965254643"} 2023-05-24 21:54:14,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 21:54:14,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure af16655b8851ff0a73ed3cff71bd2c3b, server=jenkins-hbase20.apache.org,33137,1684965252908 in 182 msec 2023-05-24 21:54:14,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 21:54:14,655 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnDatanodeDeath, region=af16655b8851ff0a73ed3cff71bd2c3b, ASSIGN in 347 msec 2023-05-24 21:54:14,656 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:54:14,656 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnDatanodeDeath","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965254656"}]},"ts":"1684965254656"} 2023-05-24 21:54:14,658 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnDatanodeDeath, state=ENABLED in hbase:meta 2023-05-24 21:54:14,661 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:54:14,663 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnDatanodeDeath in 412 msec 2023-05-24 21:54:16,725 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:54:19,167 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 21:54:19,168 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 21:54:19,169 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnDatanodeDeath' 2023-05-24 21:54:24,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:54:24,256 INFO [Listener at localhost.localdomain/42335] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnDatanodeDeath, procId: 9 completed 2023-05-24 21:54:24,259 DEBUG [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnDatanodeDeath 2023-05-24 21:54:24,259 DEBUG [Listener at localhost.localdomain/42335] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:54:24,271 WARN [Listener at localhost.localdomain/42335] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:24,274 WARN [Listener at localhost.localdomain/42335] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:24,275 INFO [Listener at localhost.localdomain/42335] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:24,280 INFO [Listener at localhost.localdomain/42335] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_46685_datanode____kx80go/webapp 2023-05-24 21:54:24,376 INFO [Listener at localhost.localdomain/42335] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46685 2023-05-24 21:54:24,387 WARN [Listener at localhost.localdomain/37643] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:24,412 WARN [Listener at localhost.localdomain/37643] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:24,416 WARN [Listener at localhost.localdomain/37643] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:24,419 INFO [Listener at localhost.localdomain/37643] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:24,427 INFO [Listener at localhost.localdomain/37643] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_41227_datanode____3veb0h/webapp 2023-05-24 21:54:24,488 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x29cb6c99c5f73fe7: Processing first storage report for DS-1f790304-1703-4fc3-92f8-00a26cb06740 from datanode 0881da4e-e7b7-447f-a025-2cc29c1c3e71 2023-05-24 21:54:24,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x29cb6c99c5f73fe7: from storage DS-1f790304-1703-4fc3-92f8-00a26cb06740 node DatanodeRegistration(127.0.0.1:33319, datanodeUuid=0881da4e-e7b7-447f-a025-2cc29c1c3e71, infoPort=40367, infoSecurePort=0, ipcPort=37643, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,489 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x29cb6c99c5f73fe7: Processing first storage report for 
DS-b30f5c08-2080-45db-8d89-8effa2537dba from datanode 0881da4e-e7b7-447f-a025-2cc29c1c3e71 2023-05-24 21:54:24,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x29cb6c99c5f73fe7: from storage DS-b30f5c08-2080-45db-8d89-8effa2537dba node DatanodeRegistration(127.0.0.1:33319, datanodeUuid=0881da4e-e7b7-447f-a025-2cc29c1c3e71, infoPort=40367, infoSecurePort=0, ipcPort=37643, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,515 INFO [Listener at localhost.localdomain/37643] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41227 2023-05-24 21:54:24,526 WARN [Listener at localhost.localdomain/44029] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:24,552 WARN [Listener at localhost.localdomain/44029] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:24,555 WARN [Listener at localhost.localdomain/44029] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:24,556 INFO [Listener at localhost.localdomain/44029] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:24,574 INFO [Listener at localhost.localdomain/44029] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_42407_datanode____.90kdr0/webapp 2023-05-24 21:54:24,635 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd602398119d3a20: Processing first storage report for DS-78946595-97ce-489d-beca-26e6ba27b9e9 from datanode 9c67f7e4-f22f-4624-ae40-abb743cdd5d7 2023-05-24 21:54:24,635 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd602398119d3a20: from storage DS-78946595-97ce-489d-beca-26e6ba27b9e9 node DatanodeRegistration(127.0.0.1:42331, datanodeUuid=9c67f7e4-f22f-4624-ae40-abb743cdd5d7, infoPort=35615, infoSecurePort=0, ipcPort=44029, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,635 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd602398119d3a20: Processing first storage report for DS-6d142cec-e85f-4954-8891-0d3be049ea57 from datanode 9c67f7e4-f22f-4624-ae40-abb743cdd5d7 2023-05-24 21:54:24,635 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd602398119d3a20: from storage DS-6d142cec-e85f-4954-8891-0d3be049ea57 node DatanodeRegistration(127.0.0.1:42331, datanodeUuid=9c67f7e4-f22f-4624-ae40-abb743cdd5d7, infoPort=35615, infoSecurePort=0, ipcPort=44029, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,675 INFO [Listener at localhost.localdomain/44029] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42407 2023-05-24 21:54:24,687 WARN [Listener at localhost.localdomain/40255] common.MetricsLoggerTask(153): Metrics logging 
will not be async since the logger is not log4j 2023-05-24 21:54:24,827 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6d0c14b62fe7507c: Processing first storage report for DS-c53fc18d-11c6-418d-a134-1de000eeb380 from datanode 81703d63-edb3-45f9-a49d-b750f462f054 2023-05-24 21:54:24,827 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6d0c14b62fe7507c: from storage DS-c53fc18d-11c6-418d-a134-1de000eeb380 node DatanodeRegistration(127.0.0.1:46531, datanodeUuid=81703d63-edb3-45f9-a49d-b750f462f054, infoPort=35269, infoSecurePort=0, ipcPort=40255, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,827 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6d0c14b62fe7507c: Processing first storage report for DS-0d082d89-d18e-4160-a6c1-2b2ec0815f44 from datanode 81703d63-edb3-45f9-a49d-b750f462f054 2023-05-24 21:54:24,827 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6d0c14b62fe7507c: from storage DS-0d082d89-d18e-4160-a6c1-2b2ec0815f44 node DatanodeRegistration(127.0.0.1:46531, datanodeUuid=81703d63-edb3-45f9-a49d-b750f462f054, infoPort=35269, infoSecurePort=0, ipcPort=40255, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:24,839 WARN [Listener at localhost.localdomain/40255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:54:24,841 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,841 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,842 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,842 WARN [DataStreamer for file 
/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965253007 block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]) is bad. 2023-05-24 21:54:24,842 WARN [DataStreamer for file /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965253467.meta block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]) is bad. 2023-05-24 21:54:24,842 WARN [DataStreamer for file /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965253313 block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]) is bad. 2023-05-24 21:54:24,851 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014 java.io.IOException: Bad response ERROR for BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014 from datanode DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 21:54:24,851 WARN [DataStreamer for file /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33985,1684965254169/jenkins-hbase20.apache.org%2C33985%2C1684965254169.1684965254352 block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014 in pipeline [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK], DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK]) is bad. 
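The DataStreamer/ResponseProcessor warnings above are the expected fallout of a datanode in the WAL write pipeline going away: datanode 127.0.0.1:41849 is marked bad and error recovery starts for each open WAL block. A rough sketch of how a test can provoke this against the mini DFS cluster (MiniDFSCluster method names are from Hadoop's test utilities; the class name and exact call sequence are illustrative, not the test's verbatim code):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class KillPipelineDatanodeSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();
    MiniDFSCluster dfs = util.getDFSCluster();
    // Bring up extra datanodes first, so HDFS can rebuild the write pipeline afterwards.
    dfs.startDataNodes(util.getConfiguration(), 2, true, null, null);
    // Stop one datanode that is serving the open WAL blocks; writers then see
    // EOF / broken-pipe errors like those above and must recover or roll the log.
    dfs.stopDataNode(0);
    util.shutdownMiniCluster();
  }
}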
2023-05-24 21:54:24,854 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:46768 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46768 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34865 remote=/127.0.0.1:46768]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,855 WARN [PacketResponder: BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41849]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 
21:54:24,858 WARN [PacketResponder: BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34865]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,859 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:24,861 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:52984 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41849:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52984 dst: /127.0.0.1:41849 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,861 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_946975500_17 at /127.0.0.1:46826 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46826 dst: /127.0.0.1:34865 java.io.IOException: Interrupted receiveBlock at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:1067) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,868 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:46790 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46790 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34865 remote=/127.0.0.1:46790]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,872 WARN [PacketResponder: BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34865]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,872 WARN [PacketResponder: 
BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:34865]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,868 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:46792 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46792 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:34865 remote=/127.0.0.1:46792]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,874 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:53038 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41849:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53038 dst: /127.0.0.1:41849 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,874 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:53024 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41849:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53024 dst: /127.0.0.1:41849 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,963 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_946975500_17 at /127.0.0.1:53086 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:41849:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:53086 dst: /127.0.0.1:41849 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:24,963 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:24,964 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 202e8ffd-9195-48d0-8926-e4e2dfd44817) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:54:24,965 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data3/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:24,965 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data4/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:24,967 WARN [Listener at localhost.localdomain/40255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:54:24,967 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,968 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,967 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,967 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1017 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:54:24,975 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:25,078 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_946975500_17 at /127.0.0.1:35802 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741838_1014]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35802 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:25,080 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:35788 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35788 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:25,081 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:35806 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35806 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:25,081 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:35790 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:34865:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:35790 dst: /127.0.0.1:34865 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:25,082 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:25,083 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 3bcab2c3-fdd7-4742-b150-f7b0f82901e7) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:54:25,084 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data1/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:25,085 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data2/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:25,091 WARN [RS:0;jenkins-hbase20:33137.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=4, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 21:54:25,092 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33137%2C1684965252908:(num 1684965253313) roll requested
2023-05-24 21:54:25,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33137] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 21:54:25,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33137] ipc.CallRunner(144): callId: 9 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:34304 deadline: 1684965275090, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-05-24 21:54:25,119 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=4, requesting roll of WAL
2023-05-24 21:54:25,119 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965253313 with entries=4, filesize=985 B; new WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092
2023-05-24 21:54:25,119 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK], DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]]
2023-05-24 21:54:25,119 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965253313 is not closed yet, will try archiving it next time
2023-05-24 21:54:25,119 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 21:54:25,119 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965253313; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
2023-05-24 21:54:37,197 INFO [Listener at localhost.localdomain/40255] wal.TestLogRolling(375): log.getCurrentFileName(): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092
2023-05-24 21:54:37,198 WARN [Listener at localhost.localdomain/40255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-05-24 21:54:37,207 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080)
2023-05-24 21:54:37,207 WARN [DataStreamer for file /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092 block BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019 in pipeline [DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK], DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK]) is bad.
2023-05-24 21:54:37,244 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:37,247 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:33810 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:33319:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:33810 dst: /127.0.0.1:33319 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:33319 remote=/127.0.0.1:33810]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:37,248 WARN [PacketResponder: BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33319]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:37,249 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:41160 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1019]] datanode.DataXceiver(323): 127.0.0.1:42331:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41160 dst: /127.0.0.1:42331 java.io.InterruptedIOException: Interrupted while 
waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:37,349 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:37,349 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 9c67f7e4-f22f-4624-ae40-abb743cdd5d7) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:54:37,350 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data7/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:37,350 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data8/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:37,355 WARN [sync.1] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]] 2023-05-24 21:54:37,355 WARN [sync.1] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]] 2023-05-24 21:54:37,355 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33137%2C1684965252908:(num 1684965265092) roll requested 2023-05-24 21:54:37,363 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741840_1021 2023-05-24 21:54:37,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:34934 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741840_1021]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741840_1021 to mirror 127.0.0.1:41849: java.net.ConnectException: Connection refused 2023-05-24 21:54:37,366 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:34934 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741840_1021]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34934 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:37,370 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK] 2023-05-24 21:54:37,379 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:46904 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741841_1022]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data5/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data6/current]'}, localName='127.0.0.1:33319', 
datanodeUuid='0881da4e-e7b7-447f-a025-2cc29c1c3e71', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741841_1022 to mirror 127.0.0.1:34865: java.net.ConnectException: Connection refused 2023-05-24 21:54:37,379 WARN [Thread-638] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741841_1022 2023-05-24 21:54:37,380 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:46904 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741841_1022]] datanode.DataXceiver(323): 127.0.0.1:33319:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46904 dst: /127.0.0.1:33319 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:37,380 WARN [Thread-638] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK] 2023-05-24 21:54:37,393 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092 with entries=2, filesize=2.36 KB; new WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 2023-05-24 21:54:37,393 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK], DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]] 2023-05-24 21:54:37,394 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092 is not closed yet, will try archiving it next time 2023-05-24 21:54:40,501 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@34a36312] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:33319, datanodeUuid=0881da4e-e7b7-447f-a025-2cc29c1c3e71, infoPort=40367, infoSecurePort=0, ipcPort=37643, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741839_1020 to 127.0.0.1:41849 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at 
org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,361 WARN [Listener at localhost.localdomain/40255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:54:41,363 WARN [ResponseProcessor for block BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023 java.io.IOException: Bad response ERROR for BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023 from datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 21:54:41,364 WARN [DataStreamer for file /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 block BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023] hdfs.DataStreamer(1548): Error Recovery for BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023 in pipeline [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK], DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK]) is bad. 2023-05-24 21:54:41,364 WARN [PacketResponder: BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:33319]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,364 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:34946 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34946 dst: /127.0.0.1:46531 java.io.IOException: Premature EOF from inputStream at 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,366 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:54:41,471 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:46906 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1023]] datanode.DataXceiver(323): 127.0.0.1:33319:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:46906 dst: /127.0.0.1:33319 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,473 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:54:41,473 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 
0881da4e-e7b7-447f-a025-2cc29c1c3e71) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:54:41,475 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data5/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:41,475 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data6/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:54:41,481 WARN [sync.4] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,481 WARN [sync.4] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,481 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33137%2C1684965252908:(num 1684965277356) roll requested 2023-05-24 21:54:41,485 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741843_1025 2023-05-24 21:54:41,486 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK] 2023-05-24 21:54:41,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33137] regionserver.HRegion(9158): Flush requested on af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:54:41,488 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741844_1026 2023-05-24 21:54:41,488 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing af16655b8851ff0a73ed3cff71bd2c3b 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:54:41,490 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK] 2023-05-24 21:54:41,493 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45286 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741845_1027]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741845_1027 to mirror 127.0.0.1:33319: java.net.ConnectException: Connection refused 2023-05-24 21:54:41,494 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning 
BP-909052452-148.251.75.209-1684965252385:blk_1073741845_1027 2023-05-24 21:54:41,494 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45286 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741845_1027]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45286 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,495 WARN [Thread-652] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:54:41,499 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45294 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741846_1028]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741846_1028 to mirror 127.0.0.1:41849: java.net.ConnectException: Connection refused 2023-05-24 21:54:41,499 WARN [Thread-652] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741846_1028 2023-05-24 21:54:41,499 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45294 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741846_1028]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45294 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,500 WARN [Thread-652] 
hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK] 2023-05-24 21:54:41,500 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45296 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741847_1029]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741847_1029 to mirror 127.0.0.1:33319: java.net.ConnectException: Connection refused 2023-05-24 21:54:41,500 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741847_1029 2023-05-24 21:54:41,501 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45296 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741847_1029]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45296 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,501 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:54:41,501 WARN [IPC Server handler 2 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 21:54:41,501 WARN [IPC Server handler 2 on default port 43361] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 21:54:41,501 WARN [IPC Server handler 2 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 
(unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 21:54:41,502 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741849_1031 2023-05-24 21:54:41,503 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK] 2023-05-24 21:54:41,505 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741850_1032 2023-05-24 21:54:41,506 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK] 2023-05-24 21:54:41,509 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 with entries=13, filesize=14.09 KB; new WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281481 2023-05-24 21:54:41,509 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,509 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 is not closed yet, will try archiving it next time 2023-05-24 21:54:41,512 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45304 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741851_1033]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741851_1033 to mirror 127.0.0.1:42331: java.net.ConnectException: Connection refused 2023-05-24 21:54:41,512 WARN [Thread-654] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741851_1033 2023-05-24 21:54:41,513 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1178358800_17 at /127.0.0.1:45304 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741851_1033]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45304 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at 
sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:41,513 WARN [Thread-654] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK] 2023-05-24 21:54:41,514 WARN [IPC Server handler 1 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 21:54:41,514 WARN [IPC Server handler 1 on default port 43361] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 21:54:41,514 WARN [IPC Server handler 1 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 21:54:41,707 WARN [sync.2] wal.FSHLog(747): HDFS pipeline error detected. Found 1 replicas but expecting no less than 2 replicas. Requesting close of WAL. 
current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,707 WARN [sync.2] wal.FSHLog(718): Requesting log roll because of low replication, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,707 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33137%2C1684965252908:(num 1684965281481) roll requested 2023-05-24 21:54:41,709 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741853_1035 2023-05-24 21:54:41,710 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK] 2023-05-24 21:54:41,711 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741854_1036 2023-05-24 21:54:41,711 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:54:41,712 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741855_1037 2023-05-24 21:54:41,713 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK] 2023-05-24 21:54:41,714 WARN [Thread-664] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741856_1038 2023-05-24 21:54:41,714 WARN [Thread-664] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:41849,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK] 2023-05-24 21:54:41,715 WARN [IPC Server handler 4 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-05-24 21:54:41,715 WARN [IPC Server handler 4 on default port 43361] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-05-24 21:54:41,715 WARN [IPC Server handler 4 on default port 43361] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-05-24 21:54:41,720 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281481 with entries=1, filesize=1.22 KB; new WAL 
/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281707 2023-05-24 21:54:41,720 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:41,720 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 is not closed yet, will try archiving it next time 2023-05-24 21:54:41,720 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281481 is not closed yet, will try archiving it next time 2023-05-24 21:54:41,724 DEBUG [Close-WAL-Writer-1] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965277356 is not closed yet, will try archiving it next time 2023-05-24 21:54:41,911 WARN [sync.4] wal.FSHLog(757): Too many consecutive RollWriter requests, it's a sign of the total number of live datanodes is lower than the tolerable replicas. 2023-05-24 21:54:41,920 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=12 (bloomFilter=true), to=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/.tmp/info/37cdb6b48de142d4bd2b0dbca2b9743d 2023-05-24 21:54:41,933 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/.tmp/info/37cdb6b48de142d4bd2b0dbca2b9743d as hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info/37cdb6b48de142d4bd2b0dbca2b9743d 2023-05-24 21:54:41,948 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info/37cdb6b48de142d4bd2b0dbca2b9743d, entries=5, sequenceid=12, filesize=10.0 K 2023-05-24 21:54:41,949 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=9.45 KB/9681 for af16655b8851ff0a73ed3cff71bd2c3b in 461ms, sequenceid=12, compaction requested=false 2023-05-24 21:54:41,949 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:54:42,119 WARN [Listener at localhost.localdomain/40255] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:54:42,122 WARN [Listener at 
localhost.localdomain/40255] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:54:42,123 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:54:42,130 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/java.io.tmpdir/Jetty_localhost_46231_datanode____.hlaaq3/webapp 2023-05-24 21:54:42,218 INFO [Listener at localhost.localdomain/40255] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46231 2023-05-24 21:54:42,225 WARN [Listener at localhost.localdomain/41955] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:54:42,340 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8b5a75d3c277391: Processing first storage report for DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34 from datanode 202e8ffd-9195-48d0-8926-e4e2dfd44817 2023-05-24 21:54:42,342 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8b5a75d3c277391: from storage DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34 node DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 7, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:54:42,342 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe8b5a75d3c277391: Processing first storage report for DS-51739db5-cf27-4f37-ac71-9c445c4b90c1 from datanode 202e8ffd-9195-48d0-8926-e4e2dfd44817 2023-05-24 21:54:42,342 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe8b5a75d3c277391: from storage DS-51739db5-cf27-4f37-ac71-9c445c4b90c1 node DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:54:42,825 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@47f55e14] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46531, datanodeUuid=81703d63-edb3-45f9-a49d-b750f462f054, infoPort=35269, infoSecurePort=0, ipcPort=40255, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741842_1024 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:42,826 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@166f2393] 
datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:46531, datanodeUuid=81703d63-edb3-45f9-a49d-b750f462f054, infoPort=35269, infoSecurePort=0, ipcPort=40255, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741852_1034 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:43,079 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:54:43,080 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C38657%2C1684965252869:(num 1684965253007) roll requested 2023-05-24 21:54:43,085 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741858_1040 2023-05-24 21:54:43,085 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:54:43,086 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK] 2023-05-24 21:54:43,086 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:54:43,088 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741859_1041 2023-05-24 21:54:43,089 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:42331,DS-78946595-97ce-489d-beca-26e6ba27b9e9,DISK] 2023-05-24 21:54:43,092 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:45334 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741860_1042]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741860_1042 to mirror 127.0.0.1:33319: java.net.ConnectException: Connection refused 2023-05-24 21:54:43,092 WARN [Thread-703] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741860_1042 2023-05-24 21:54:43,092 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:45334 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741860_1042]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:45334 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:43,092 WARN [Thread-703] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:54:43,106 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-24 21:54:43,106 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965253007 with entries=88, filesize=43.75 KB; new WAL 
/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965283080 2023-05-24 21:54:43,106 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36485,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:54:43,106 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965253007 is not closed yet, will try archiving it next time 2023-05-24 21:54:43,106 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:54:43,108 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869/jenkins-hbase20.apache.org%2C38657%2C1684965252869.1684965253007; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:54:55,344 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@5c2311ea] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741836_1012 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:55,344 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@3e8df9d0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741834_1010 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:56,342 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@536d7c72] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741830_1006 to 127.0.0.1:42331 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:56,343 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@405137d0] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, 
storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741828_1004 to 127.0.0.1:42331 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:58,343 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@439c271] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741827_1003 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:54:58,343 WARN [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@552bf85f] datanode.DataNode$DataTransfer(2503): DatanodeRegistration(127.0.0.1:36485, datanodeUuid=202e8ffd-9195-48d0-8926-e4e2dfd44817, infoPort=34075, infoSecurePort=0, ipcPort=41955, storageInfo=lv=-57;cid=testClusterID;nsid=1430337944;c=1684965252385):Failed to transfer BP-909052452-148.251.75.209-1684965252385:blk_1073741825_1001 to 127.0.0.1:33319 got java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2431) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:00,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:55614 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741862_1044]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current]'}, localName='127.0.0.1:46531', datanodeUuid='81703d63-edb3-45f9-a49d-b750f462f054', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741862_1044 to mirror 127.0.0.1:33319: java.net.ConnectException: Connection refused 2023-05-24 
21:55:00,557 WARN [Thread-719] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741862_1044 2023-05-24 21:55:00,557 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:55614 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741862_1044]] datanode.DataXceiver(323): 127.0.0.1:46531:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:55614 dst: /127.0.0.1:46531 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:00,558 WARN [Thread-719] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:55:00,572 INFO [Listener at localhost.localdomain/41955] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281707 with entries=2, filesize=1.57 KB; new WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965300551 2023-05-24 21:55:00,572 DEBUG [Listener at localhost.localdomain/41955] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36485,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:55:00,572 DEBUG [Listener at localhost.localdomain/41955] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965281707 is not closed yet, will try archiving it next time 2023-05-24 21:55:00,573 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092 to hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/oldWALs/jenkins-hbase20.apache.org%2C33137%2C1684965252908.1684965265092 2023-05-24 21:55:00,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33137] regionserver.HRegion(9158): Flush requested on af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:55:00,580 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing af16655b8851ff0a73ed3cff71bd2c3b 1/1 column families, dataSize=10.50 KB heapSize=11.50 KB 2023-05-24 21:55:00,581 INFO [sync.3] wal.FSHLog(774): LowReplication-Roller was enabled. 
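[Editor's note] The records up to this point show FSHLog repeatedly detecting a single-replica pipeline, requesting rolls, and finally reporting that the LowReplication-Roller was enabled. The thresholds behind that behaviour are configurable. Below is a minimal sketch, not taken from TestLogRolling itself, assuming the configuration keys read by FSHLog in branch-2.4 (key names should be verified against the branch in use):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalLowReplicationConfigSketch {
        public static void main(String[] args) {
            // Configuration handed to the mini cluster before it is started.
            Configuration conf = HBaseConfiguration.create();
            // Replica count below which FSHLog treats the write pipeline as degraded and
            // requests a roll (the "Found 1 replicas but expecting no less than 2 replicas"
            // messages above). Key name assumed from branch-2.4 FSHLog.
            conf.setInt("hbase.regionserver.hlog.tolerable.lowreplication", 2);
            // Cap on consecutive low-replication rolls before the roller backs off
            // (the "Too many consecutive RollWriter requests" warning above).
            conf.setInt("hbase.regionserver.hlog.lowreplication.rolllimit", 3);
        }
    }
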
2023-05-24 21:55:00,587 WARN [Thread-727] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741864_1046 2023-05-24 21:55:00,588 WARN [Thread-727] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:55:00,608 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:55:00,608 INFO [Listener at localhost.localdomain/41955] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 21:55:00,608 DEBUG [Listener at localhost.localdomain/41955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x592c7b1a to 127.0.0.1:58580 2023-05-24 21:55:00,608 DEBUG [Listener at localhost.localdomain/41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:00,609 DEBUG [Listener at localhost.localdomain/41955] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:55:00,609 DEBUG [Listener at localhost.localdomain/41955] util.JVMClusterUtil(257): Found active master hash=643031938, stopped=false 2023-05-24 21:55:00,609 INFO [Listener at localhost.localdomain/41955] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:55:00,610 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:00,611 INFO [Listener at localhost.localdomain/41955] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:55:00,611 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.50 KB at sequenceid=25 (bloomFilter=true), to=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/.tmp/info/49df157814924bc3aa8a64a2ed9ae167 2023-05-24 21:55:00,611 DEBUG [Listener at localhost.localdomain/41955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x57f1cb84 to 127.0.0.1:58580 2023-05-24 21:55:00,611 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:00,610 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:00,610 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:00,611 DEBUG [Listener at localhost.localdomain/41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:00,612 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:00,612 INFO [Listener at localhost.localdomain/41955] regionserver.HRegionServer(2295): ***** STOPPING region server 
'jenkins-hbase20.apache.org,33137,1684965252908' ***** 2023-05-24 21:55:00,612 INFO [Listener at localhost.localdomain/41955] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:55:00,612 INFO [Listener at localhost.localdomain/41955] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,33985,1684965254169' ***** 2023-05-24 21:55:00,612 INFO [Listener at localhost.localdomain/41955] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:55:00,612 INFO [RS:1;jenkins-hbase20:33985] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:55:00,612 INFO [RS:0;jenkins-hbase20:33137] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:55:00,612 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:55:00,612 INFO [RS:1;jenkins-hbase20:33985] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:55:00,613 INFO [RS:1;jenkins-hbase20:33985] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:55:00,613 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:55:00,613 DEBUG [RS:1;jenkins-hbase20:33985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4077225a to 127.0.0.1:58580 2023-05-24 21:55:00,613 DEBUG [RS:1;jenkins-hbase20:33985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:00,613 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:00,613 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:00,613 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33985,1684965254169; all regions closed. 2023-05-24 21:55:00,622 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:55:00,624 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,633 ERROR [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1539): Shutdown / close of WAL failed: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 2023-05-24 21:55:00,633 DEBUG [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1540): Shutdown / close exception details: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,633 DEBUG [RS:1;jenkins-hbase20:33985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:00,633 INFO [RS:1;jenkins-hbase20:33985] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:00,633 INFO [RS:1;jenkins-hbase20:33985] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 21:55:00,633 INFO [RS:1;jenkins-hbase20:33985] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:55:00,633 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:55:00,633 INFO [RS:1;jenkins-hbase20:33985] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:55:00,633 INFO [RS:1;jenkins-hbase20:33985] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:55:00,634 INFO [RS:1;jenkins-hbase20:33985] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33985 2023-05-24 21:55:00,637 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:55:00,637 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:00,637 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33985,1684965254169 2023-05-24 21:55:00,637 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:00,637 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:00,638 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,33985,1684965254169] 2023-05-24 21:55:00,638 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,33985,1684965254169; numProcessing=1 2023-05-24 21:55:00,639 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,33985,1684965254169 already deleted, retry=false 2023-05-24 21:55:00,639 INFO [RegionServerTracker-0] 
master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,33985,1684965254169 expired; onlineServers=1 2023-05-24 21:55:00,640 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/.tmp/info/49df157814924bc3aa8a64a2ed9ae167 as hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info/49df157814924bc3aa8a64a2ed9ae167 2023-05-24 21:55:00,646 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info/49df157814924bc3aa8a64a2ed9ae167, entries=8, sequenceid=25, filesize=13.2 K 2023-05-24 21:55:00,648 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.50 KB/10757, heapSize ~11.48 KB/11760, currentSize=9.46 KB/9684 for af16655b8851ff0a73ed3cff71bd2c3b in 68ms, sequenceid=25, compaction requested=false 2023-05-24 21:55:00,648 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:55:00,648 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=23.2 K, sizeToCheck=16.0 K 2023-05-24 21:55:00,648 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:55:00,648 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/default/TestLogRolling-testLogRollOnDatanodeDeath/af16655b8851ff0a73ed3cff71bd2c3b/info/49df157814924bc3aa8a64a2ed9ae167 because midkey is the same as first or last row 2023-05-24 21:55:00,648 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:55:00,648 INFO [RS:0;jenkins-hbase20:33137] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:55:00,648 INFO [RS:0;jenkins-hbase20:33137] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
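[Editor's note] The flush records just above ("Flushed memstore data size=10.50 KB at sequenceid=25", "Added ... entries=8, sequenceid=25, filesize=13.2 K", followed by the split-policy check) are the result of small writes crossing a small memstore flush threshold. The sketch below shows how such records can be produced with the HBase 2.x client API; the table name "FlushSketch", the 8 KB flush size, and the row count are illustrative assumptions, not the values used by this test:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SmallFlushWriteSketch {
        static void writeAndFlush(Connection conn) throws Exception {
            TableName name = TableName.valueOf("FlushSketch");   // hypothetical table
            try (Admin admin = conn.getAdmin()) {
                // Deliberately tiny flush size so a few KB of puts already trigger
                // "Flushing ... 1/1 column families" / "Finished flush" records like
                // the ones above.
                admin.createTable(TableDescriptorBuilder.newBuilder(name)
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
                    .setMemStoreFlushSize(8 * 1024)
                    .build());
                try (Table table = conn.getTable(name)) {
                    byte[] value = new byte[1024];
                    for (int i = 0; i < 32; i++) {
                        table.put(new Put(Bytes.toBytes("row-" + i))
                            .addColumn(Bytes.toBytes("info"), Bytes.toBytes("q"), value));
                    }
                }
                admin.flush(name);   // force any remaining memstore data to an HFile
            }
        }
    }
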
2023-05-24 21:55:00,648 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(3303): Received CLOSE for fe99d63cb92056da15a4255b774359f7 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(3303): Received CLOSE for af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:55:00,649 DEBUG [RS:0;jenkins-hbase20:33137] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4481509b to 127.0.0.1:58580 2023-05-24 21:55:00,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing fe99d63cb92056da15a4255b774359f7, disabling compactions & flushes 2023-05-24 21:55:00,649 DEBUG [RS:0;jenkins-hbase20:33137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:00,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:55:00,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:55:00,649 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. after waiting 0 ms 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 
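[Editor's note] From "Shutting down minicluster" onward, the records are produced by the test's teardown: the master is asked to stop the cluster, and each region server receives CLOSE for its online regions and flushes what remains in their memstores before exiting. A minimal sketch of that lifecycle, assuming HBaseTestingUtility as used throughout this run (not the actual TestLogRolling code):

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterLifecycleSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            util.startMiniCluster();
            try {
                // ... writes, WAL rolls, and datanode failures exercised here ...
            } finally {
                // Produces the sequence seen above: cluster shutdown is requested,
                // region servers close their regions (flushing remaining memstore
                // data) and stop, and the WALs are closed.
                util.shutdownMiniCluster();
            }
        }
    }
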
2023-05-24 21:55:00,650 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 21:55:00,650 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing fe99d63cb92056da15a4255b774359f7 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:55:00,650 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1478): Online Regions={fe99d63cb92056da15a4255b774359f7=hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7., 1588230740=hbase:meta,,1.1588230740, af16655b8851ff0a73ed3cff71bd2c3b=TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b.} 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:00,650 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1504): Waiting on 1588230740, af16655b8851ff0a73ed3cff71bd2c3b, fe99d63cb92056da15a4255b774359f7 2023-05-24 21:55:00,650 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:00,650 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:00,651 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.93 KB heapSize=5.45 KB 2023-05-24 21:55:00,651 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,651 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta:.meta(num 1684965253467) roll requested 2023-05-24 21:55:00,651 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:00,652 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,33137,1684965252908: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,653 ERROR [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-24 21:55:00,656 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-24 21:55:00,657 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-24 21:55:00,658 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-24 21:55:00,658 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-24 21:55:00,658 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1028653056, "init": 524288000, "max": 2051014656, "used": 314231064 }, "NonHeapMemoryUsage": { "committed": 133914624, "init": 2555904, "max": -1, "used": 131326912 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-24 21:55:00,658 WARN [Thread-735] hdfs.DataStreamer(1658): Abandoning 
BP-909052452-148.251.75.209-1684965252385:blk_1073741867_1049 2023-05-24 21:55:00,659 WARN [Thread-735] hdfs.DataStreamer(1663): Excluding datanode DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:55:00,660 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL 2023-05-24 21:55:00,660 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965253467.meta with entries=11, filesize=3.69 KB; new WAL /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965300651.meta 2023-05-24 21:55:00,660 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36485,DS-5a0f6c87-dfd5-40a0-bd43-074559e69b34,DISK], DatanodeInfoWithStorage[127.0.0.1:46531,DS-c53fc18d-11c6-418d-a134-1de000eeb380,DISK]] 2023-05-24 21:55:00,660 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965253467.meta is not closed yet, will try archiving it next time 2023-05-24 21:55:00,660 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,660 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908/jenkins-hbase20.apache.org%2C33137%2C1684965252908.meta.1684965253467.meta; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,666 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38657] master.MasterRpcServices(609): jenkins-hbase20.apache.org,33137,1684965252908 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,33137,1684965252908: Unrecoverable exception while closing hbase:meta,,1.1588230740 ***** Cause: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,DS-68111bb2-9654-4ebb-81c4-cb070842208b,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:00,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/.tmp/info/86a03f11449b4c06914b2b9e0d7cbe0d 2023-05-24 21:55:00,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/.tmp/info/86a03f11449b4c06914b2b9e0d7cbe0d as hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/info/86a03f11449b4c06914b2b9e0d7cbe0d 2023-05-24 21:55:00,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/info/86a03f11449b4c06914b2b9e0d7cbe0d, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 21:55:00,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for fe99d63cb92056da15a4255b774359f7 in 50ms, sequenceid=6, compaction requested=false 2023-05-24 21:55:00,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/data/hbase/namespace/fe99d63cb92056da15a4255b774359f7/recovered.edits/9.seqid, 
newMaxSeqId=9, maxSeqId=1 2023-05-24 21:55:00,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for fe99d63cb92056da15a4255b774359f7: 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684965253540.fe99d63cb92056da15a4255b774359f7. 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing af16655b8851ff0a73ed3cff71bd2c3b, disabling compactions & flushes 2023-05-24 21:55:00,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. after waiting 0 ms 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:55:00,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,850 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:55:00,851 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(3303): Received CLOSE for af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing af16655b8851ff0a73ed3cff71bd2c3b, disabling compactions & flushes 2023-05-24 21:55:00,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 
after waiting 0 ms 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,851 DEBUG [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1504): Waiting on 1588230740, af16655b8851ff0a73ed3cff71bd2c3b 2023-05-24 21:55:00,851 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for af16655b8851ff0a73ed3cff71bd2c3b: 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnDatanodeDeath,,1684965254249.af16655b8851ff0a73ed3cff71bd2c3b. 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:00,851 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 21:55:00,911 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:00,912 INFO [RS:1;jenkins-hbase20:33985] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33985,1684965254169; zookeeper connection closed. 2023-05-24 21:55:00,912 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33985-0x1017f7777180005, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:00,913 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d742dd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d742dd 2023-05-24 21:55:01,051 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-24 21:55:01,051 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33137,1684965252908; all regions closed. 
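Note on the abort above: the "All datanodes [DatanodeInfoWithStorage[127.0.0.1:34865,...]] are bad. Aborting..." IOException comes from the HDFS client exhausting its append pipeline after the test killed a datanode, which is why FSHLog raises DamagedWALException, requests a WAL roll, and the region server aborts while closing hbase:meta even though hbase:namespace still flushed cleanly on a healthy pipeline. Whether the DFS client tries to replace a failed datanode before giving up is governed by the replace-datanode-on-failure client settings. A minimal, illustrative sketch of those settings follows; the values shown are assumptions for illustration, not the configuration actually used by this run.

    import org.apache.hadoop.conf.Configuration;

    // Hedged sketch (hypothetical class name): illustrative client-side settings only;
    // this test run's real configuration is not visible in the log.
    public class PipelineRecoveryConfigSketch {
      public static Configuration relaxedPipelinePolicy() {
        Configuration conf = new Configuration();
        // Let the DFS client try to replace a failed datanode in an append pipeline.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // DEFAULT / ALWAYS / NEVER; DEFAULT replaces only when enough live nodes remain.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
        // Keep writing on the surviving datanodes if replacement fails, rather than aborting.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
        return conf;
      }
    }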
2023-05-24 21:55:01,052 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:55:01,058 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/WALs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:55:01,062 DEBUG [RS:0;jenkins-hbase20:33137] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:01,062 INFO [RS:0;jenkins-hbase20:33137] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:01,062 INFO [RS:0;jenkins-hbase20:33137] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 21:55:01,063 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:55:01,063 INFO [RS:0;jenkins-hbase20:33137] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33137 2023-05-24 21:55:01,065 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:01,065 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33137,1684965252908 2023-05-24 21:55:01,066 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,33137,1684965252908] 2023-05-24 21:55:01,066 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,33137,1684965252908; numProcessing=2 2023-05-24 21:55:01,067 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,33137,1684965252908 already deleted, retry=false 2023-05-24 21:55:01,067 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,33137,1684965252908 expired; onlineServers=0 2023-05-24 21:55:01,067 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,38657,1684965252869' ***** 2023-05-24 21:55:01,067 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:55:01,067 DEBUG [M:0;jenkins-hbase20:38657] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@226c7390, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:55:01,068 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:55:01,068 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38657,1684965252869; all regions closed. 
2023-05-24 21:55:01,068 DEBUG [M:0;jenkins-hbase20:38657] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:01,068 DEBUG [M:0;jenkins-hbase20:38657] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:55:01,068 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 21:55:01,068 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965253087] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965253087,5,FailOnTimeoutGroup] 2023-05-24 21:55:01,068 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965253082] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965253082,5,FailOnTimeoutGroup] 2023-05-24 21:55:01,068 DEBUG [M:0;jenkins-hbase20:38657] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:55:01,069 INFO [M:0;jenkins-hbase20:38657] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:55:01,069 INFO [M:0;jenkins-hbase20:38657] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 21:55:01,069 INFO [M:0;jenkins-hbase20:38657] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:55:01,069 DEBUG [M:0;jenkins-hbase20:38657] master.HMaster(1512): Stopping service threads 2023-05-24 21:55:01,069 INFO [M:0;jenkins-hbase20:38657] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:55:01,070 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:01,070 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:01,070 ERROR [M:0;jenkins-hbase20:38657] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 21:55:01,070 INFO [M:0;jenkins-hbase20:38657] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:55:01,070 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:01,071 DEBUG [M:0;jenkins-hbase20:38657] zookeeper.ZKUtil(398): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:55:01,071 WARN [M:0;jenkins-hbase20:38657] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:55:01,071 INFO [M:0;jenkins-hbase20:38657] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:55:01,070 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
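Note: at this point the master is stopping its service threads; the markers worth keying on when triaging a run like this are the ones that already appeared above ("requesting roll of WAL", "All datanodes", "ABORTING region server"). A small, self-contained sketch of counting those markers in a saved copy of this log; the class and default file name are hypothetical, only the marker strings are taken from the log itself.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Stream;

    // Hypothetical triage helper, not part of HBase or of this test.
    public class WalAbortTriage {
      public static void main(String[] args) throws IOException {
        Path log = Path.of(args.length > 0 ? args[0] : "TestLogRolling-output.txt");
        List<String> markers = List.of(
            "requesting roll of WAL",
            "All datanodes",
            "ABORTING region server");
        for (String marker : markers) {
          try (Stream<String> lines = Files.lines(log)) {
            // Count how many log records mention this marker.
            System.out.println(marker + ": " + lines.filter(l -> l.contains(marker)).count());
          }
        }
      }
    }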
2023-05-24 21:55:01,072 INFO [M:0;jenkins-hbase20:38657] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:55:01,072 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:55:01,072 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:01,072 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:01,072 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:55:01,072 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:01,072 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.12 KB heapSize=45.77 KB 2023-05-24 21:55:01,082 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:43844 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741869_1051]] datanode.DataXceiver(847): DataNode{data=FSDataset{dirpath='[/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data3/current, /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data4/current]'}, localName='127.0.0.1:36485', datanodeUuid='202e8ffd-9195-48d0-8926-e4e2dfd44817', xmitsInProgress=0}:Exception transfering block BP-909052452-148.251.75.209-1684965252385:blk_1073741869_1051 to mirror 127.0.0.1:33319: java.net.ConnectException: Connection refused 2023-05-24 21:55:01,082 WARN [Thread-750] hdfs.DataStreamer(1658): Abandoning BP-909052452-148.251.75.209-1684965252385:blk_1073741869_1051 2023-05-24 21:55:01,083 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_90449050_17 at /127.0.0.1:43844 [Receiving block BP-909052452-148.251.75.209-1684965252385:blk_1073741869_1051]] datanode.DataXceiver(323): 127.0.0.1:36485:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:43844 dst: /127.0.0.1:36485 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:769) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:01,083 WARN [Thread-750] hdfs.DataStreamer(1663): Excluding datanode 
DatanodeInfoWithStorage[127.0.0.1:33319,DS-1f790304-1703-4fc3-92f8-00a26cb06740,DISK] 2023-05-24 21:55:01,090 INFO [M:0;jenkins-hbase20:38657] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.12 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cf84810328874550a20caa2bcfeab565 2023-05-24 21:55:01,097 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/cf84810328874550a20caa2bcfeab565 as hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cf84810328874550a20caa2bcfeab565 2023-05-24 21:55:01,104 INFO [M:0;jenkins-hbase20:38657] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43361/user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/cf84810328874550a20caa2bcfeab565, entries=11, sequenceid=92, filesize=7.0 K 2023-05-24 21:55:01,105 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegion(2948): Finished flush of dataSize ~38.12 KB/39035, heapSize ~45.75 KB/46848, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 33ms, sequenceid=92, compaction requested=false 2023-05-24 21:55:01,106 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:01,106 DEBUG [M:0;jenkins-hbase20:38657] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:01,107 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/6fc60db2-b74c-d931-6a5c-0b253f68cfca/MasterData/WALs/jenkins-hbase20.apache.org,38657,1684965252869 2023-05-24 21:55:01,111 INFO [M:0;jenkins-hbase20:38657] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:55:01,111 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:55:01,111 INFO [M:0;jenkins-hbase20:38657] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38657 2023-05-24 21:55:01,112 DEBUG [M:0;jenkins-hbase20:38657] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,38657,1684965252869 already deleted, retry=false 2023-05-24 21:55:01,179 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:01,213 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:01,214 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): regionserver:33137-0x1017f7777180001, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:01,213 INFO [RS:0;jenkins-hbase20:33137] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33137,1684965252908; zookeeper connection closed. 
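Note: the lines that follow ("Shutdown of 1 master(s) and 2 regionserver(s) complete", the datanode and MiniZK shutdowns, and finally "Minicluster is down") are the teardown performed by HBaseTestingUtility. A minimal sketch of the usual teardown call, assuming the standard JUnit pattern these tests use; the class and method names here are illustrative, not copied from the test source.

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Hedged sketch of the teardown that produces the "Minicluster is down" line below.
    public class MiniClusterTeardownSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      static void tearDownCluster() throws Exception {
        // Stops the master and region servers, then DFS and ZooKeeper,
        // and logs "Minicluster is down" when finished.
        TEST_UTIL.shutdownMiniCluster();
      }
    }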
2023-05-24 21:55:01,214 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7628688d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7628688d 2023-05-24 21:55:01,215 INFO [Listener at localhost.localdomain/41955] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-05-24 21:55:01,314 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:01,314 DEBUG [Listener at localhost.localdomain/42335-EventThread] zookeeper.ZKWatcher(600): master:38657-0x1017f7777180000, quorum=127.0.0.1:58580, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:01,314 INFO [M:0;jenkins-hbase20:38657] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38657,1684965252869; zookeeper connection closed. 2023-05-24 21:55:01,315 WARN [Listener at localhost.localdomain/41955] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:01,319 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:01,340 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 202e8ffd-9195-48d0-8926-e4e2dfd44817) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:55:01,341 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data3/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:01,341 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data4/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:01,425 WARN [Listener at localhost.localdomain/41955] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:01,429 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:01,531 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:01,531 WARN [BP-909052452-148.251.75.209-1684965252385 heartbeating to localhost.localdomain/127.0.0.1:43361] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-909052452-148.251.75.209-1684965252385 (Datanode Uuid 81703d63-edb3-45f9-a49d-b750f462f054) service to localhost.localdomain/127.0.0.1:43361 2023-05-24 21:55:01,532 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data9/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:01,532 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/cluster_d6998843-d292-0620-89cb-bce4821ea758/dfs/data/data10/current/BP-909052452-148.251.75.209-1684965252385] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:01,547 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:55:01,664 INFO [Listener at localhost.localdomain/41955] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:55:01,711 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:55:01,723 INFO [Listener at localhost.localdomain/41955] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnDatanodeDeath Thread=74 (was 51) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/41955 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-17-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:43361 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Abort regionserver monitor java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-6-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:43361 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-6-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:43361 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-17-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-1 java.lang.Thread.sleep(Native 
Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:43361 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-6-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:43361 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-15-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
Potentially hanging thread: nioEventLoopGroup-17-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase20:0.leaseChecker java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:82) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:43361 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=457 (was 432) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=123 (was 138), ProcessCount=168 (was 168), AvailableMemoryMB=10205 (was 9952) - AvailableMemoryMB LEAK? - 2023-05-24 21:55:01,732 INFO [Listener at localhost.localdomain/41955] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=74, OpenFileDescriptor=457, MaxFileDescriptor=60000, SystemLoadAverage=123, ProcessCount=168, AvailableMemoryMB=10204 2023-05-24 21:55:01,733 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:55:01,733 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/hadoop.log.dir so I do NOT create it in target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95 2023-05-24 21:55:01,733 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/d5a4ae9a-9cce-1c00-c79f-c0b349bae843/hadoop.tmp.dir so I do NOT create it in target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95 2023-05-24 21:55:01,733 INFO [Listener at localhost.localdomain/41955] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4, deleteOnExit=true 2023-05-24 21:55:01,733 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:55:01,734 INFO [Listener at 
localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/test.cache.data in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:55:01,734 DEBUG [Listener at localhost.localdomain/41955] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:55:01,734 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/nfs.dump.dir in 
system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:55:01,735 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:55:01,736 INFO [Listener at localhost.localdomain/41955] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:55:01,737 WARN [Listener at localhost.localdomain/41955] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:55:01,738 WARN [Listener at localhost.localdomain/41955] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:55:01,738 WARN [Listener at localhost.localdomain/41955] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:55:01,776 WARN [Listener at localhost.localdomain/41955] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:01,778 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:01,784 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_localdomain_43991_hdfs____a2sqvg/webapp 2023-05-24 21:55:01,863 INFO [Listener at localhost.localdomain/41955] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43991 2023-05-24 21:55:01,865 WARN [Listener at localhost.localdomain/41955] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
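The entries above show HBaseTestingUtility routing every per-test directory into the configuration and then formatting a fresh mini DFS with clusterid testClusterID. As a rough illustration of the test-side API that produces this kind of startup sequence, here is a minimal sketch assuming the HBase 2.4 test utilities are on the classpath; the class name and cluster sizes are illustrative, not taken from the test source.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    // HBaseTestingUtility owns the per-test data directories seen in the log
    // (hbase.rootdir, hadoop.tmp.dir, java.io.tmpdir, ...).
    HBaseTestingUtility util = new HBaseTestingUtility();

    // One master, one region server, two datanodes (illustrative sizes).
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(1)
        .numDataNodes(2)
        .build();

    util.startMiniCluster(option);   // formats DFS, starts ZK, master and RS
    try {
      // ... test body would exercise the cluster here ...
    } finally {
      util.shutdownMiniCluster();    // tears down HBase, DFS and ZooKeeper
    }
  }
}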
2023-05-24 21:55:01,866 WARN [Listener at localhost.localdomain/41955] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:55:01,866 WARN [Listener at localhost.localdomain/41955] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:55:01,895 WARN [Listener at localhost.localdomain/36975] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:01,910 WARN [Listener at localhost.localdomain/36975] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:01,912 WARN [Listener at localhost.localdomain/36975] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:01,914 INFO [Listener at localhost.localdomain/36975] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:01,919 INFO [Listener at localhost.localdomain/36975] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_42579_datanode____iprbi8/webapp 2023-05-24 21:55:02,011 INFO [Listener at localhost.localdomain/36975] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42579 2023-05-24 21:55:02,018 WARN [Listener at localhost.localdomain/42991] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:02,042 WARN [Listener at localhost.localdomain/42991] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:02,045 WARN [Listener at localhost.localdomain/42991] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:02,046 INFO [Listener at localhost.localdomain/42991] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:02,057 INFO [Listener at localhost.localdomain/42991] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_34171_datanode____.afiwcv/webapp 2023-05-24 21:55:02,099 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x77372f1e53642d57: Processing first storage report for DS-7a01207b-a12f-4de2-af98-634f7571de3c from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:02,099 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x77372f1e53642d57: from storage DS-7a01207b-a12f-4de2-af98-634f7571de3c node DatanodeRegistration(127.0.0.1:41905, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=41561, infoSecurePort=0, ipcPort=42991, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:02,099 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x77372f1e53642d57: Processing first storage report for DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b 
from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:02,099 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x77372f1e53642d57: from storage DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b node DatanodeRegistration(127.0.0.1:41905, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=41561, infoSecurePort=0, ipcPort=42991, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:02,146 INFO [Listener at localhost.localdomain/42991] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34171 2023-05-24 21:55:02,153 WARN [Listener at localhost.localdomain/38795] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:02,225 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:02,235 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac095fc91c754a84: Processing first storage report for DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:02,235 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac095fc91c754a84: from storage DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 node DatanodeRegistration(127.0.0.1:45913, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=45741, infoSecurePort=0, ipcPort=38795, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:02,236 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xac095fc91c754a84: Processing first storage report for DS-6cb015cb-0892-42c1-b0e8-66993b712d06 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:02,236 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xac095fc91c754a84: from storage DS-6cb015cb-0892-42c1-b0e8-66993b712d06 node DatanodeRegistration(127.0.0.1:45913, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=45741, infoSecurePort=0, ipcPort=38795, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:02,262 DEBUG [Listener at localhost.localdomain/38795] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95 2023-05-24 21:55:02,264 INFO [Listener at localhost.localdomain/38795] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/zookeeper_0, clientPort=51259, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/zookeeper_0/version-2, dataLogSize=424 
tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:55:02,267 INFO [Listener at localhost.localdomain/38795] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51259 2023-05-24 21:55:02,268 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,269 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,289 INFO [Listener at localhost.localdomain/38795] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda with version=8 2023-05-24 21:55:02,290 INFO [Listener at localhost.localdomain/38795] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:55:02,292 INFO [Listener at localhost.localdomain/38795] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:55:02,292 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,292 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,293 INFO [Listener at localhost.localdomain/38795] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:55:02,293 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,293 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:55:02,293 INFO [Listener at localhost.localdomain/38795] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:55:02,295 INFO [Listener at localhost.localdomain/38795] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44937 2023-05-24 21:55:02,296 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,297 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,299 INFO [Listener at 
localhost.localdomain/38795] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44937 connecting to ZooKeeper ensemble=127.0.0.1:51259 2023-05-24 21:55:02,310 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:449370x0, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:55:02,315 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44937-0x1017f7838230000 connected 2023-05-24 21:55:02,399 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:02,401 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:02,401 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:55:02,404 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44937 2023-05-24 21:55:02,404 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44937 2023-05-24 21:55:02,405 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44937 2023-05-24 21:55:02,405 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44937 2023-05-24 21:55:02,406 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44937 2023-05-24 21:55:02,406 INFO [Listener at localhost.localdomain/38795] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda, hbase.cluster.distributed=false 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:02,422 INFO [Listener at localhost.localdomain/38795] ipc.RpcExecutor(189): Instantiated 
metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:55:02,423 INFO [Listener at localhost.localdomain/38795] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:55:02,424 INFO [Listener at localhost.localdomain/38795] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43003 2023-05-24 21:55:02,424 INFO [Listener at localhost.localdomain/38795] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:55:02,425 DEBUG [Listener at localhost.localdomain/38795] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:55:02,426 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,427 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,428 INFO [Listener at localhost.localdomain/38795] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43003 connecting to ZooKeeper ensemble=127.0.0.1:51259 2023-05-24 21:55:02,433 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:430030x0, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:55:02,434 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): regionserver:430030x0, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:02,434 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43003-0x1017f7838230001 connected 2023-05-24 21:55:02,435 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:02,435 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ZKUtil(164): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:55:02,438 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43003 2023-05-24 21:55:02,438 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43003 2023-05-24 21:55:02,442 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43003 2023-05-24 21:55:02,443 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43003 2023-05-24 21:55:02,443 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43003 2023-05-24 21:55:02,444 INFO [master/jenkins-hbase20:0:becomeActiveMaster] 
master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,446 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:55:02,446 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,447 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:02,447 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:02,447 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,448 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:55:02,449 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:55:02,449 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44937,1684965302291 from backup master directory 2023-05-24 21:55:02,450 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,450 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:55:02,450 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
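The master and region server above both connect to the ensemble at 127.0.0.1:51259 and set watchers on znodes under /hbase (master, running, acl, backup-masters). A quick way to inspect that layout from a test is the plain ZooKeeper client; the sketch below only reuses the client port logged in this run and is purely illustrative.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkLayoutSketch {
  public static void main(String[] args) throws Exception {
    // 51259 is the client port MiniZooKeeperCluster reported for this run.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51259", 30000, event -> { });
    try {
      // List the coordination znodes the master and region server watch.
      List<String> children = zk.getChildren("/hbase", false);
      System.out.println("/hbase children: " + children);

      // /hbase/master holds the active master's serialized ServerName.
      Stat stat = zk.exists("/hbase/master", false);
      System.out.println("/hbase/master exists: " + (stat != null));
    } finally {
      zk.close();
    }
  }
}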
2023-05-24 21:55:02,450 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,470 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/hbase.id with ID: 93659e3d-8057-42e7-ae4e-3a364355440a 2023-05-24 21:55:02,485 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:02,487 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,495 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3af1e826 to 127.0.0.1:51259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:02,501 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59006f3c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:02,501 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:02,502 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:55:02,502 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:02,504 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store-tmp 2023-05-24 21:55:02,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:02,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:55:02,518 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:02,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:02,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:55:02,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:02,518 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:02,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:02,519 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,522 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44937%2C1684965302291, suffix=, logDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291, archiveDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/oldWALs, maxLogs=10 2023-05-24 21:55:02,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965302522 2023-05-24 21:55:02,539 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] 2023-05-24 21:55:02,539 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:02,539 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:02,539 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,539 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,543 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,545 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 21:55:02,545 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:55:02,546 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:02,547 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,547 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,550 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:02,553 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:02,554 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=769894, jitterRate=-0.02103029191493988}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:02,554 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:02,554 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:55:02,555 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:55:02,555 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:55:02,555 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 21:55:02,556 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 21:55:02,556 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 21:55:02,556 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:55:02,567 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:55:02,568 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:55:02,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:55:02,578 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
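The master's WAL above is created by FSHLogProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=10; the roll size is simply the block size multiplied by the logroll multiplier. Those numbers come from ordinary configuration keys, and a log-rolling test will typically tighten them to force frequent rolls. The keys below are standard HBase 2.x settings; the specific values are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // FSHLog ("filesystem") is the WAL provider instantiated in this log.
    conf.set("hbase.wal.provider", "filesystem");

    // rollsize = blocksize * multiplier; this run used 256 MB * 0.5 = 128 MB.
    // A much smaller block size (illustrative) makes rolls happen far sooner.
    conf.setLong("hbase.regionserver.hlog.blocksize", 2L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);

    // Time-based rolling and the retained-WAL cap (maxLogs=10 above).
    conf.setLong("hbase.regionserver.logroll.period", 3600 * 1000L);
    conf.setInt("hbase.regionserver.maxlogs", 10);

    System.out.println("WAL block size: "
        + conf.getLong("hbase.regionserver.hlog.blocksize", -1));
  }
}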
2023-05-24 21:55:02,579 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:55:02,579 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:55:02,580 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:55:02,581 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,581 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:55:02,581 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:55:02,582 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:55:02,583 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:02,583 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,583 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:02,583 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44937,1684965302291, sessionid=0x1017f7838230000, setting cluster-up flag (Was=false) 2023-05-24 21:55:02,592 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,595 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:55:02,596 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,599 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,602 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:55:02,603 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:02,604 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.hbase-snapshot/.tmp 2023-05-24 21:55:02,621 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:02,621 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:02,621 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:02,621 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:02,622 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:02,622 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:55:02,622 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,622 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:55:02,622 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,625 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965332625 2023-05-24 21:55:02,626 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:55:02,626 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:55:02,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:55:02,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:55:02,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:55:02,627 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:55:02,629 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,629 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:02,631 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:55:02,631 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:55:02,631 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:55:02,631 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:55:02,632 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:55:02,632 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:55:02,632 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:02,635 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965302632,5,FailOnTimeoutGroup] 2023-05-24 21:55:02,638 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965302636,5,FailOnTimeoutGroup] 2023-05-24 21:55:02,638 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
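The hbase:meta descriptor written above is an ordinary TableDescriptor: three in-memory column families plus the MultiRowMutationEndpoint coprocessor attribute. For reference, a descriptor with the same shape as the logged 'info' family can be assembled with the 2.x builder API; this is an illustrative reconstruction with a made-up table name, not the code path HBase itself uses.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptorSketch {
  public static void main(String[] args) throws Exception {
    // Mirrors the logged 'info' family: BLOOMFILTER=NONE, IN_MEMORY=true,
    // VERSIONS=3, BLOCKSIZE=8192.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_meta_like"))   // hypothetical name
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build())
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();

    System.out.println(desc);
  }
}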
2023-05-24 21:55:02,638 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:55:02,638 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,638 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,646 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(951): ClusterId : 93659e3d-8057-42e7-ae4e-3a364355440a 2023-05-24 21:55:02,646 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:55:02,648 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:55:02,648 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:55:02,660 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:55:02,662 DEBUG [RS:0;jenkins-hbase20:43003] zookeeper.ReadOnlyZKClient(139): Connect 0x187d6cd4 to 127.0.0.1:51259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:02,662 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:02,662 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:02,663 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda 2023-05-24 21:55:02,681 DEBUG [RS:0;jenkins-hbase20:43003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c42eb03, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, 
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:02,682 DEBUG [RS:0;jenkins-hbase20:43003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@594066ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:55:02,689 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:02,690 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:55:02,692 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/info 2023-05-24 21:55:02,692 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:55:02,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:02,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:55:02,694 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43003 2023-05-24 21:55:02,694 INFO [RS:0;jenkins-hbase20:43003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:55:02,694 INFO [RS:0;jenkins-hbase20:43003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:55:02,694 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1022): About to register with Master. 
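Once the region server above registers with the master it appears in the cluster's live-server list, and a client-side Admin can ask it to roll its WAL, which is the operation a log-rolling test ultimately exercises. A hedged sketch follows; the ZooKeeper quorum values are the ones from this run, everything else is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The mini ZooKeeper cluster in this run listened on 127.0.0.1:51259.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 51259);

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Every registered region server shows up in the live-server metrics.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        System.out.println("live region server: " + sn);
        admin.rollWALWriter(sn);   // ask the server to roll its current WAL
      }
    }
  }
}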
2023-05-24 21:55:02,695 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44937,1684965302291 with isa=jenkins-hbase20.apache.org/148.251.75.209:43003, startcode=1684965302421 2023-05-24 21:55:02,695 DEBUG [RS:0;jenkins-hbase20:43003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:55:02,695 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:02,696 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:55:02,697 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:02,698 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:55:02,701 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43105, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:55:02,701 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/table 2023-05-24 21:55:02,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,702 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:55:02,703 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1595): Config from master: 
hbase.rootdir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda 2023-05-24 21:55:02,703 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36975 2023-05-24 21:55:02,703 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:55:02,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:02,704 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:02,704 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740 2023-05-24 21:55:02,705 DEBUG [RS:0;jenkins-hbase20:43003] zookeeper.ZKUtil(162): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,705 WARN [RS:0;jenkins-hbase20:43003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 21:55:02,705 INFO [RS:0;jenkins-hbase20:43003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:02,705 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740 2023-05-24 21:55:02,705 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,705 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43003,1684965302421] 2023-05-24 21:55:02,709 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-24 21:55:02,709 DEBUG [RS:0;jenkins-hbase20:43003] zookeeper.ZKUtil(162): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,710 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:55:02,710 INFO [RS:0;jenkins-hbase20:43003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:55:02,710 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:55:02,714 INFO [RS:0;jenkins-hbase20:43003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:55:02,715 INFO [RS:0;jenkins-hbase20:43003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:55:02,716 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:02,716 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,716 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:55:02,717 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=767124, jitterRate=-0.02455255389213562}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:55:02,717 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:55:02,717 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:02,717 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:02,717 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:02,717 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 21:55:02,717 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:02,718 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,718 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:55:02,718 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:55:02,718 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,719 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,719 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,719 DEBUG [RS:0;jenkins-hbase20:43003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:02,719 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:02,719 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:55:02,720 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:55:02,723 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,723 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,723 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-05-24 21:55:02,724 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:55:02,725 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:55:02,735 INFO [RS:0;jenkins-hbase20:43003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:55:02,735 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43003,1684965302421-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:02,744 INFO [RS:0;jenkins-hbase20:43003] regionserver.Replication(203): jenkins-hbase20.apache.org,43003,1684965302421 started 2023-05-24 21:55:02,744 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43003,1684965302421, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43003, sessionid=0x1017f7838230001 2023-05-24 21:55:02,744 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:55:02,744 DEBUG [RS:0;jenkins-hbase20:43003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,744 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43003,1684965302421' 2023-05-24 21:55:02,744 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43003,1684965302421' 2023-05-24 21:55:02,745 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:55:02,746 DEBUG [RS:0;jenkins-hbase20:43003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:55:02,746 DEBUG [RS:0;jenkins-hbase20:43003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:55:02,746 INFO [RS:0;jenkins-hbase20:43003] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:55:02,746 INFO [RS:0;jenkins-hbase20:43003] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 21:55:02,849 INFO [RS:0;jenkins-hbase20:43003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43003%2C1684965302421, suffix=, logDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421, archiveDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/oldWALs, maxLogs=32 2023-05-24 21:55:02,871 INFO [RS:0;jenkins-hbase20:43003] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 2023-05-24 21:55:02,871 DEBUG [RS:0;jenkins-hbase20:43003] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] 2023-05-24 21:55:02,875 DEBUG [jenkins-hbase20:44937] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:55:02,876 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43003,1684965302421, state=OPENING 2023-05-24 21:55:02,877 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:55:02,878 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:02,879 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43003,1684965302421}] 2023-05-24 21:55:02,879 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:55:03,033 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:03,033 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:55:03,036 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:55:03,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:55:03,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:03,042 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43003%2C1684965302421.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421, archiveDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/oldWALs, maxLogs=32 2023-05-24 21:55:03,054 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.meta.1684965303046.meta 2023-05-24 21:55:03,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] 2023-05-24 21:55:03,054 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:03,055 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:55:03,055 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:55:03,055 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:55:03,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:55:03,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:03,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:55:03,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:55:03,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:55:03,059 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/info 2023-05-24 21:55:03,059 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/info 2023-05-24 21:55:03,059 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:55:03,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:03,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:55:03,061 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:03,062 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:03,062 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:55:03,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:03,063 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:55:03,064 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/table 2023-05-24 21:55:03,064 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740/table 2023-05-24 21:55:03,065 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:55:03,065 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:03,066 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740 2023-05-24 21:55:03,068 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/meta/1588230740 2023-05-24 21:55:03,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:55:03,071 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:55:03,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=719092, jitterRate=-0.08562730252742767}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:55:03,072 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:55:03,074 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965303033 2023-05-24 21:55:03,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:55:03,079 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:55:03,080 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43003,1684965302421, state=OPEN 2023-05-24 21:55:03,081 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:55:03,081 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:55:03,083 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:55:03,084 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43003,1684965302421 in 203 msec 2023-05-24 
21:55:03,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:55:03,086 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 364 msec 2023-05-24 21:55:03,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 469 msec 2023-05-24 21:55:03,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965303088, completionTime=-1 2023-05-24 21:55:03,089 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:55:03,089 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:55:03,091 DEBUG [hconnection-0x534811cb-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:55:03,094 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:55:03,096 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:55:03,096 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965363096 2023-05-24 21:55:03,096 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965423096 2023-05-24 21:55:03,096 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44937,1684965302291-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44937,1684965302291-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44937,1684965302291-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44937, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:03,106 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-24 21:55:03,107 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:03,109 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:55:03,112 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:55:03,114 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:55:03,115 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:55:03,117 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,117 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34 empty. 2023-05-24 21:55:03,118 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,118 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:55:03,145 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:03,146 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => d44ae157b28d1a92c545df81ddcfec34, NAME => 'hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp 2023-05-24 21:55:03,164 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:03,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing d44ae157b28d1a92c545df81ddcfec34, disabling compactions & flushes 2023-05-24 21:55:03,165 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. after waiting 0 ms 2023-05-24 21:55:03,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,165 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,165 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for d44ae157b28d1a92c545df81ddcfec34: 2023-05-24 21:55:03,168 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:55:03,169 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965303168"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965303168"}]},"ts":"1684965303168"} 2023-05-24 21:55:03,171 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:55:03,172 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:55:03,172 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965303172"}]},"ts":"1684965303172"} 2023-05-24 21:55:03,174 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:55:03,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d44ae157b28d1a92c545df81ddcfec34, ASSIGN}] 2023-05-24 21:55:03,183 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d44ae157b28d1a92c545df81ddcfec34, ASSIGN 2023-05-24 21:55:03,184 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=d44ae157b28d1a92c545df81ddcfec34, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43003,1684965302421; forceNewPlan=false, retain=false 2023-05-24 21:55:03,335 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d44ae157b28d1a92c545df81ddcfec34, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:03,335 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965303335"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965303335"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965303335"}]},"ts":"1684965303335"} 2023-05-24 21:55:03,338 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure d44ae157b28d1a92c545df81ddcfec34, server=jenkins-hbase20.apache.org,43003,1684965302421}] 2023-05-24 21:55:03,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d44ae157b28d1a92c545df81ddcfec34, NAME => 'hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:03,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:03,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,496 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,497 INFO [StoreOpener-d44ae157b28d1a92c545df81ddcfec34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,499 DEBUG [StoreOpener-d44ae157b28d1a92c545df81ddcfec34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34/info 2023-05-24 21:55:03,499 DEBUG [StoreOpener-d44ae157b28d1a92c545df81ddcfec34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34/info 2023-05-24 21:55:03,499 INFO [StoreOpener-d44ae157b28d1a92c545df81ddcfec34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d44ae157b28d1a92c545df81ddcfec34 columnFamilyName info 2023-05-24 21:55:03,500 INFO [StoreOpener-d44ae157b28d1a92c545df81ddcfec34-1] regionserver.HStore(310): Store=d44ae157b28d1a92c545df81ddcfec34/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:03,500 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,503 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:03,505 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/hbase/namespace/d44ae157b28d1a92c545df81ddcfec34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:03,506 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d44ae157b28d1a92c545df81ddcfec34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=822142, jitterRate=0.04540817439556122}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:03,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d44ae157b28d1a92c545df81ddcfec34: 2023-05-24 21:55:03,508 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34., pid=6, masterSystemTime=1684965303490 2023-05-24 21:55:03,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:03,511 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 
2023-05-24 21:55:03,512 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d44ae157b28d1a92c545df81ddcfec34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:03,512 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965303512"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965303512"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965303512"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965303512"}]},"ts":"1684965303512"} 2023-05-24 21:55:03,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:55:03,518 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure d44ae157b28d1a92c545df81ddcfec34, server=jenkins-hbase20.apache.org,43003,1684965302421 in 177 msec 2023-05-24 21:55:03,520 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:55:03,521 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=d44ae157b28d1a92c545df81ddcfec34, ASSIGN in 338 msec 2023-05-24 21:55:03,521 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:55:03,522 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965303521"}]},"ts":"1684965303521"} 2023-05-24 21:55:03,523 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:55:03,526 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:55:03,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 419 msec 2023-05-24 21:55:03,610 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:55:03,612 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:03,612 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:03,616 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:55:03,626 DEBUG [Listener at localhost.localdomain/38795-EventThread] 
zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:03,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-05-24 21:55:03,638 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:55:03,648 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:03,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-05-24 21:55:03,664 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:55:03,665 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:55:03,665 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.215sec 2023-05-24 21:55:03,665 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:55:03,666 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:55:03,667 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:55:03,667 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44937,1684965302291-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:55:03,667 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44937,1684965302291-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 21:55:03,669 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:55:03,746 DEBUG [Listener at localhost.localdomain/38795] zookeeper.ReadOnlyZKClient(139): Connect 0x5ede761c to 127.0.0.1:51259 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:03,751 DEBUG [Listener at localhost.localdomain/38795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67fcaa16, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:03,753 DEBUG [hconnection-0x4e9ac161-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:55:03,755 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43074, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:55:03,757 INFO [Listener at localhost.localdomain/38795] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:03,757 INFO [Listener at localhost.localdomain/38795] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:03,760 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:55:03,760 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:03,761 INFO [Listener at localhost.localdomain/38795] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:55:03,761 INFO [Listener at localhost.localdomain/38795] wal.TestLogRolling(429): Starting testLogRollOnPipelineRestart 2023-05-24 21:55:03,761 INFO [Listener at localhost.localdomain/38795] wal.TestLogRolling(432): Replication=2 2023-05-24 21:55:03,762 DEBUG [Listener at localhost.localdomain/38795] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 21:55:03,767 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46982, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 21:55:03,769 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 21:55:03,769 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 21:55:03,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:03,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart 2023-05-24 21:55:03,774 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:55:03,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRollOnPipelineRestart" procId is: 9 2023-05-24 21:55:03,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:55:03,775 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:55:03,777 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:03,778 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf empty. 
2023-05-24 21:55:03,778 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:03,778 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRollOnPipelineRestart regions 2023-05-24 21:55:03,792 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp/data/default/TestLogRolling-testLogRollOnPipelineRestart/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:03,793 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(7675): creating {ENCODED => 66ba05d27f73fad98f00ea988b0205bf, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRollOnPipelineRestart', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/.tmp 2023-05-24 21:55:03,803 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:03,803 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1604): Closing 66ba05d27f73fad98f00ea988b0205bf, disabling compactions & flushes 2023-05-24 21:55:03,804 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:03,804 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:03,804 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. after waiting 0 ms 2023-05-24 21:55:03,804 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:03,804 INFO [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 
2023-05-24 21:55:03,804 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRollOnPipelineRestart-pool-0] regionserver.HRegion(1558): Region close journal for 66ba05d27f73fad98f00ea988b0205bf: 2023-05-24 21:55:03,807 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:55:03,808 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684965303808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965303808"}]},"ts":"1684965303808"} 2023-05-24 21:55:03,810 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:55:03,811 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:55:03,812 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965303811"}]},"ts":"1684965303811"} 2023-05-24 21:55:03,813 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLING in hbase:meta 2023-05-24 21:55:03,816 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=66ba05d27f73fad98f00ea988b0205bf, ASSIGN}] 2023-05-24 21:55:03,818 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=66ba05d27f73fad98f00ea988b0205bf, ASSIGN 2023-05-24 21:55:03,819 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=66ba05d27f73fad98f00ea988b0205bf, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43003,1684965302421; forceNewPlan=false, retain=false 2023-05-24 21:55:03,970 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=66ba05d27f73fad98f00ea988b0205bf, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:03,971 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684965303970"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965303970"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965303970"}]},"ts":"1684965303970"} 2023-05-24 21:55:03,973 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 66ba05d27f73fad98f00ea988b0205bf, 
server=jenkins-hbase20.apache.org,43003,1684965302421}] 2023-05-24 21:55:04,130 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:04,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 66ba05d27f73fad98f00ea988b0205bf, NAME => 'TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:04,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRollOnPipelineRestart 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:04,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,132 INFO [StoreOpener-66ba05d27f73fad98f00ea988b0205bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,134 DEBUG [StoreOpener-66ba05d27f73fad98f00ea988b0205bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf/info 2023-05-24 21:55:04,134 DEBUG [StoreOpener-66ba05d27f73fad98f00ea988b0205bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf/info 2023-05-24 21:55:04,134 INFO [StoreOpener-66ba05d27f73fad98f00ea988b0205bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 66ba05d27f73fad98f00ea988b0205bf columnFamilyName info 2023-05-24 21:55:04,135 INFO [StoreOpener-66ba05d27f73fad98f00ea988b0205bf-1] regionserver.HStore(310): Store=66ba05d27f73fad98f00ea988b0205bf/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-05-24 21:55:04,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:04,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/data/default/TestLogRolling-testLogRollOnPipelineRestart/66ba05d27f73fad98f00ea988b0205bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:04,141 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 66ba05d27f73fad98f00ea988b0205bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=838812, jitterRate=0.06660501658916473}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:04,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 66ba05d27f73fad98f00ea988b0205bf: 2023-05-24 21:55:04,142 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf., pid=11, masterSystemTime=1684965304126 2023-05-24 21:55:04,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:04,145 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 
2023-05-24 21:55:04,146 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=66ba05d27f73fad98f00ea988b0205bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:04,146 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf.","families":{"info":[{"qualifier":"regioninfo","vlen":77,"tag":[],"timestamp":"1684965304146"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965304146"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965304146"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965304146"}]},"ts":"1684965304146"} 2023-05-24 21:55:04,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 21:55:04,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 66ba05d27f73fad98f00ea988b0205bf, server=jenkins-hbase20.apache.org,43003,1684965302421 in 175 msec 2023-05-24 21:55:04,152 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 21:55:04,153 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRollOnPipelineRestart, region=66ba05d27f73fad98f00ea988b0205bf, ASSIGN in 334 msec 2023-05-24 21:55:04,153 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:55:04,153 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRollOnPipelineRestart","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965304153"}]},"ts":"1684965304153"} 2023-05-24 21:55:04,155 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRollOnPipelineRestart, state=ENABLED in hbase:meta 2023-05-24 21:55:04,157 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:55:04,158 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRollOnPipelineRestart in 387 msec 2023-05-24 21:55:06,254 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:55:08,710 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 21:55:08,712 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRollOnPipelineRestart' 2023-05-24 21:55:13,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:55:13,777 INFO [Listener at localhost.localdomain/38795] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testLogRollOnPipelineRestart, procId: 9 
completed 2023-05-24 21:55:13,779 DEBUG [Listener at localhost.localdomain/38795] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRollOnPipelineRestart 2023-05-24 21:55:13,779 DEBUG [Listener at localhost.localdomain/38795] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:15,786 INFO [Listener at localhost.localdomain/38795] wal.TestLogRolling(469): log.getCurrentFileName()): hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 2023-05-24 21:55:15,787 WARN [Listener at localhost.localdomain/38795] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:15,789 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:15,789 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008] hdfs.DataStreamer(1548): Error Recovery for BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008 in pipeline [DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]) is bad. 
2023-05-24 21:55:15,791 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005 java.io.IOException: Bad response ERROR for BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005 from datanode DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 21:55:15,792 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009 java.io.IOException: Bad response ERROR for BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009 from datanode DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 21:55:15,792 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.meta.1684965303046.meta block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009] hdfs.DataStreamer(1548): Error Recovery for BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]) is bad. 
2023-05-24 21:55:15,792 WARN [PacketResponder: BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45913]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,803 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965302522 block BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005] hdfs.DataStreamer(1548): Error Recovery for BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005 in pipeline [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:45913,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]) is bad. 
2023-05-24 21:55:15,805 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:34086 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34086 dst: /127.0.0.1:41905 java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:406) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,805 WARN [PacketResponder: BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:45913]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,817 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1701532455_17 at /127.0.0.1:34050 [Receiving block 
BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34050 dst: /127.0.0.1:41905 java.io.IOException: Connection reset by peer at sun.nio.ch.FileDispatcherImpl.read0(Native Method) at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) at sun.nio.ch.IOUtil.read(IOUtil.java:197) at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,833 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:34084 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:34084 dst: /127.0.0.1:41905 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:41905 remote=/127.0.0.1:34084]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,834 WARN [PacketResponder: BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:41905]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.io.IOException: The stream is closed at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,835 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:50388 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:45913:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50388 dst: /127.0.0.1:45913 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,835 INFO [Listener at localhost.localdomain/38795] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:15,839 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:15,839 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 5c148da9-1ff3-4199-a124-1d8cb974f698) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:15,839 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1701532455_17 at /127.0.0.1:50366 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:45913:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50366 dst: /127.0.0.1:45913 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,839 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:50390 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:45913:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:50390 dst: /127.0.0.1:45913 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:15,840 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data3/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:15,841 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data4/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:15,848 WARN [Listener at localhost.localdomain/38795] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:15,851 WARN [Listener at localhost.localdomain/38795] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:15,852 INFO [Listener at localhost.localdomain/38795] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:15,859 INFO [Listener at localhost.localdomain/38795] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_38477_datanode____1h7cvq/webapp 2023-05-24 21:55:15,945 INFO [Listener at localhost.localdomain/38795] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38477 2023-05-24 21:55:15,954 WARN [Listener at localhost.localdomain/45833] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 
2023-05-24 21:55:15,966 WARN [Listener at localhost.localdomain/45833] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:15,966 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1014] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1014 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:15,966 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1015] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1015 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:15,970 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1016] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1016 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:15,976 INFO [Listener at localhost.localdomain/45833] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:16,036 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87498ed49338e5a9: Processing first storage report for DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:16,036 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87498ed49338e5a9: from storage DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 node DatanodeRegistration(127.0.0.1:38765, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=33503, infoSecurePort=0, ipcPort=45833, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:16,037 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87498ed49338e5a9: Processing first storage report for DS-6cb015cb-0892-42c1-b0e8-66993b712d06 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:16,037 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87498ed49338e5a9: from storage DS-6cb015cb-0892-42c1-b0e8-66993b712d06 node DatanodeRegistration(127.0.0.1:38765, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=33503, infoSecurePort=0, ipcPort=45833, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 
2023-05-24 21:55:16,079 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:16,080 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1701532455_17 at /127.0.0.1:58094 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741829_1005]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58094 dst: /127.0.0.1:41905 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:16,080 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:58110 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741833_1009]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58110 dst: /127.0.0.1:41905 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:16,080 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:16,080 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:58092 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1008]] datanode.DataXceiver(323): 127.0.0.1:41905:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:58092 dst: /127.0.0.1:41905 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:16,083 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:16,083 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data2/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:16,090 WARN [Listener at localhost.localdomain/45833] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:16,093 WARN [Listener at localhost.localdomain/45833] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:16,094 INFO [Listener at localhost.localdomain/45833] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:16,100 INFO [Listener at localhost.localdomain/45833] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_46787_datanode____.yvp9id/webapp 2023-05-24 21:55:16,190 INFO [Listener at localhost.localdomain/45833] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46787 2023-05-24 21:55:16,198 WARN [Listener at localhost.localdomain/44777] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 
2023-05-24 21:55:16,288 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76cb7be1b6959349: Processing first storage report for DS-7a01207b-a12f-4de2-af98-634f7571de3c from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:16,288 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76cb7be1b6959349: from storage DS-7a01207b-a12f-4de2-af98-634f7571de3c node DatanodeRegistration(127.0.0.1:35537, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=40235, infoSecurePort=0, ipcPort=44777, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 7, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:16,288 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76cb7be1b6959349: Processing first storage report for DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:16,289 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76cb7be1b6959349: from storage DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b node DatanodeRegistration(127.0.0.1:35537, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=40235, infoSecurePort=0, ipcPort=44777, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:17,206 INFO [Listener at localhost.localdomain/44777] wal.TestLogRolling(481): Data Nodes restarted 2023-05-24 21:55:17,210 INFO [Listener at localhost.localdomain/44777] wal.AbstractTestLogRolling(233): Validated row row1002 2023-05-24 21:55:17,212 WARN [RS:0;jenkins-hbase20:43003.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=5, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:17,214 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43003%2C1684965302421:(num 1684965302855) roll requested 2023-05-24 21:55:17,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43003] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:17,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43003] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:43074 deadline: 1684965327211, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-24 21:55:17,226 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 newFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 2023-05-24 21:55:17,226 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=5, requesting roll of WAL 2023-05-24 21:55:17,227 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 with entries=5, filesize=2.11 KB; new WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 2023-05-24 21:55:17,227 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:35537,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] 2023-05-24 21:55:17,227 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 is not closed yet, will try archiving it next time 2023-05-24 21:55:17,227 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:17,227 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:29,257 INFO [Listener at localhost.localdomain/44777] wal.AbstractTestLogRolling(233): Validated row row1003 2023-05-24 21:55:31,261 WARN [Listener at localhost.localdomain/44777] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:31,266 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017 java.io.IOException: Bad response ERROR for BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017 from datanode DatanodeInfoWithStorage[127.0.0.1:35537,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK] at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1120) 2023-05-24 21:55:31,266 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017] hdfs.DataStreamer(1548): Error Recovery for BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017 in pipeline [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:35537,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]]: datanode 1(DatanodeInfoWithStorage[127.0.0.1:35537,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]) is bad. 
2023-05-24 21:55:31,267 WARN [PacketResponder: BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:35537]] datanode.BlockReceiver$PacketResponder(1486): IOException in BlockReceiver.run(): java.nio.channels.ClosedByInterruptException at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:477) at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.DataOutputStream.flush(DataOutputStream.java:123) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1630) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1565) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1478) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:31,269 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:48184 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38765:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:48184 dst: /127.0.0.1:38765 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:31,273 INFO [Listener at localhost.localdomain/44777] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:31,287 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:31,287 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to 
refresh disk information: sleep interrupted 2023-05-24 21:55:31,288 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data2/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:31,377 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:54348 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:35537:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:54348 dst: /127.0.0.1:35537 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:31,390 WARN [Listener at localhost.localdomain/44777] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:31,393 WARN [Listener at localhost.localdomain/44777] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:31,394 INFO [Listener at localhost.localdomain/44777] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:31,401 INFO [Listener at localhost.localdomain/44777] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_36673_datanode____glv1yu/webapp 2023-05-24 21:55:31,476 INFO [Listener at localhost.localdomain/44777] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36673 2023-05-24 21:55:31,486 WARN [Listener at 
localhost.localdomain/33657] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:31,492 WARN [Listener at localhost.localdomain/33657] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:31,492 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1018] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1018 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:31,495 INFO [Listener at localhost.localdomain/33657] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:31,544 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa57c155f9954b310: Processing first storage report for DS-7a01207b-a12f-4de2-af98-634f7571de3c from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:31,545 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa57c155f9954b310: from storage DS-7a01207b-a12f-4de2-af98-634f7571de3c node DatanodeRegistration(127.0.0.1:36631, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=37965, infoSecurePort=0, ipcPort=33657, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:31,545 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa57c155f9954b310: Processing first storage report for DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b from datanode 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7 2023-05-24 21:55:31,545 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa57c155f9954b310: from storage DS-0522ed48-7cd4-4905-ac6e-a8a9f4eaf81b node DatanodeRegistration(127.0.0.1:36631, datanodeUuid=68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7, infoPort=37965, infoSecurePort=0, ipcPort=33657, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 6, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:31,600 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_963678403_17 at /127.0.0.1:39060 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1017]] datanode.DataXceiver(323): 127.0.0.1:38765:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:39060 dst: /127.0.0.1:38765 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 60000 millis timeout left. 
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:31,602 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:31,602 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 5c148da9-1ff3-4199-a124-1d8cb974f698) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:31,603 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data3/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:31,604 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data4/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:31,612 WARN [Listener at localhost.localdomain/33657] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:31,615 WARN [Listener at localhost.localdomain/33657] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:31,617 INFO [Listener at localhost.localdomain/33657] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:31,623 INFO [Listener at localhost.localdomain/33657] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/java.io.tmpdir/Jetty_localhost_38005_datanode____.u0ohnt/webapp 2023-05-24 21:55:31,702 INFO [Listener at localhost.localdomain/33657] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38005 2023-05-24 21:55:31,709 WARN [Listener at localhost.localdomain/33255] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:31,782 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x914a557f02c95fd1: Processing first storage report for DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:31,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x914a557f02c95fd1: from storage DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350 node DatanodeRegistration(127.0.0.1:37033, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=46355, infoSecurePort=0, ipcPort=33255, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 6, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:55:31,782 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x914a557f02c95fd1: Processing first storage report for DS-6cb015cb-0892-42c1-b0e8-66993b712d06 from datanode 5c148da9-1ff3-4199-a124-1d8cb974f698 2023-05-24 21:55:31,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x914a557f02c95fd1: from storage DS-6cb015cb-0892-42c1-b0e8-66993b712d06 node DatanodeRegistration(127.0.0.1:37033, datanodeUuid=5c148da9-1ff3-4199-a124-1d8cb974f698, infoPort=46355, infoSecurePort=0, ipcPort=33255, storageInfo=lv=-57;cid=testClusterID;nsid=277563555;c=1684965301740), blocks: 8, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:32,628 WARN [master/jenkins-hbase20:0:becomeActiveMaster.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=91, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,628 DEBUG [master:store-WAL-Roller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C44937%2C1684965302291:(num 1684965302522) roll requested 2023-05-24 21:55:32,628 ERROR [ProcExecTimeout] helpers.MarkerIgnoringBase(151): Failed to delete pids=[4, 7, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,629 ERROR [ProcExecTimeout] procedure2.TimeoutExecutorThread(124): Ignoring pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner exception: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL java.io.UncheckedIOException: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.procedure2.store.region.RegionProcedureStore.delete(RegionProcedureStore.java:423) at org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner.periodicExecute(CompletedProcedureCleaner.java:135) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.executeInMemoryChore(TimeoutExecutorThread.java:122) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.execDelayedProcedure(TimeoutExecutorThread.java:101) at org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:68) Caused by: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. 
Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,641 WARN [master:store-WAL-Roller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=91, requesting roll of WAL 2023-05-24 21:55:32,641 INFO [master:store-WAL-Roller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965302522 with entries=88, filesize=43.81 KB; new WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965332628 2023-05-24 21:55:32,641 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37033,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:36631,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] 2023-05-24 21:55:32,641 DEBUG [master:store-WAL-Roller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965302522 is not closed yet, will try archiving it next time 2023-05-24 21:55:32,641 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,641 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291/jenkins-hbase20.apache.org%2C44937%2C1684965302291.1684965302522; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,714 INFO [Listener at localhost.localdomain/33255] wal.TestLogRolling(498): Data Nodes restarted 2023-05-24 21:55:32,716 INFO [Listener at localhost.localdomain/33255] wal.AbstractTestLogRolling(233): Validated row row1004 2023-05-24 21:55:32,717 WARN [RS:0;jenkins-hbase20:43003.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=8, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,717 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43003%2C1684965302421:(num 1684965317214) roll requested 2023-05-24 21:55:32,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43003] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43003] ipc.CallRunner(144): callId: 18 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:43074 deadline: 1684965342716, exception=org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-24 21:55:32,728 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 newFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 2023-05-24 21:55:32,728 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL 2023-05-24 21:55:32,728 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 with entries=2, filesize=2.37 KB; new WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 2023-05-24 21:55:32,728 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36631,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK], DatanodeInfoWithStorage[127.0.0.1:37033,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] 2023-05-24 21:55:32,728 WARN [Close-WAL-Writer-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:32,728 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 is not closed yet, will try archiving it next time 2023-05-24 21:55:32,729 WARN [Close-WAL-Writer-0] wal.FSHLog(466): Riding over failed WAL close of hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214; THIS FILE WAS NOT CLOSED BUT ALL EDITS SYNCED SO SHOULD BE OK java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:38765,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:44,780 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 newFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 2023-05-24 21:55:44,782 INFO [Listener at localhost.localdomain/33255] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 with entries=1, filesize=1.22 KB; new WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 2023-05-24 21:55:44,786 DEBUG [Listener at localhost.localdomain/33255] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37033,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:36631,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] 2023-05-24 21:55:44,786 DEBUG [Listener at localhost.localdomain/33255] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 is not closed yet, will try archiving it next time 2023-05-24 21:55:44,786 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(512): recovering lease for 
hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 2023-05-24 21:55:44,787 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 2023-05-24 21:55:44,790 WARN [IPC Server handler 2 on default port 36975] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 has not been closed. Lease recovery is in progress. RecoveryId = 1022 for block blk_1073741832_1014 2023-05-24 21:55:44,793 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 after 6ms 2023-05-24 21:55:45,813 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@60c93925] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1287258583-148.251.75.209-1684965301740:blk_1073741832_1014, datanode=DatanodeInfoWithStorage[127.0.0.1:37033,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741832_1014, replica=ReplicaWaitingToBeRecovered, blk_1073741832_1008, RWR getNumBytes() = 2162 getBytesOnDisk() = 2162 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data4/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data4/current/BP-1287258583-148.251.75.209-1684965301740/current/rbw/blk_1073741832 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:48,794 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on 
file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 after 4007ms 2023-05-24 21:55:48,794 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965302855 2023-05-24 21:55:48,805 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684965303506/Put/vlen=176/seqid=0] 2023-05-24 21:55:48,805 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #4: [default/info:d/1684965303622/Put/vlen=9/seqid=0] 2023-05-24 21:55:48,805 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #5: [hbase/info:d/1684965303645/Put/vlen=7/seqid=0] 2023-05-24 21:55:48,806 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #3: [\x00/METAFAMILY:HBASE::REGION_EVENT::REGION_OPEN/1684965304141/Put/vlen=232/seqid=0] 2023-05-24 21:55:48,806 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #4: [row1002/info:/1684965313784/Put/vlen=1045/seqid=0] 2023-05-24 21:55:48,806 DEBUG [Listener at localhost.localdomain/33255] wal.ProtobufLogReader(420): EOF at position 2162 2023-05-24 21:55:48,806 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 2023-05-24 21:55:48,806 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 2023-05-24 21:55:48,807 WARN [IPC Server handler 4 on default port 36975] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 has not been closed. Lease recovery is in progress. 
RecoveryId = 1023 for block blk_1073741838_1018 2023-05-24 21:55:48,807 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 after 1ms 2023-05-24 21:55:49,794 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@69f8c529] datanode.BlockRecoveryWorker$RecoveryTaskContiguous(155): Failed to recover block (block=BP-1287258583-148.251.75.209-1684965301740:blk_1073741838_1018, datanode=DatanodeInfoWithStorage[127.0.0.1:36631,null,null]) java.io.IOException: replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current/BP-1287258583-148.251.75.209-1684965301740/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:348) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.access$300(BlockRecoveryWorker.java:46) at 
org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:120) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:383) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): replica.getGenerationStamp() < block.getGenerationStamp(), block=blk_1073741838_1018, replica=ReplicaWaitingToBeRecovered, blk_1073741838_1017, RWR getNumBytes() = 2425 getBytesOnDisk() = 2425 getVisibleLength()= -1 getVolume() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current getBlockFile() = /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current/BP-1287258583-148.251.75.209-1684965301740/current/rbw/blk_1073741838 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecoveryImpl(FsDatasetImpl.java:2694) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2655) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2644) at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2835) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy43.initReplicaRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.initReplicaRecovery(InterDatanodeProtocolTranslatorPB.java:83) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker.callInitReplicaRecovery(BlockRecoveryWorker.java:346) ... 
4 more 2023-05-24 21:55:52,808 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=1 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 after 4002ms 2023-05-24 21:55:52,808 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965317214 2023-05-24 21:55:52,813 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #6: [row1003/info:/1684965327251/Put/vlen=1045/seqid=0] 2023-05-24 21:55:52,813 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #7: [row1004/info:/1684965329258/Put/vlen=1045/seqid=0] 2023-05-24 21:55:52,813 DEBUG [Listener at localhost.localdomain/33255] wal.ProtobufLogReader(420): EOF at position 2425 2023-05-24 21:55:52,813 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 2023-05-24 21:55:52,813 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 2023-05-24 21:55:52,814 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 after 1ms 2023-05-24 21:55:52,814 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965332717 2023-05-24 21:55:52,819 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(522): #9: [row1005/info:/1684965342766/Put/vlen=1045/seqid=0] 2023-05-24 21:55:52,819 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(512): recovering lease for hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 2023-05-24 21:55:52,819 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 2023-05-24 21:55:52,819 WARN [IPC Server handler 1 on default port 36975] namenode.FSNamesystem(3291): DIR* NameSystem.internalReleaseLease: File /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 has not 
been closed. Lease recovery is in progress. RecoveryId = 1024 for block blk_1073741841_1021 2023-05-24 21:55:52,820 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Failed to recover lease, attempt=0 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 after 1ms 2023-05-24 21:55:53,788 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1701532455_17 at /127.0.0.1:52436 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:37033:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:52436 dst: /127.0.0.1:37033 java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=localhost/127.0.0.1:37033 remote=/127.0.0.1:52436]. 60000 millis timeout left. at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) at java.io.BufferedInputStream.read(BufferedInputStream.java:345) at java.io.DataInputStream.read(DataInputStream.java:149) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:53,791 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_1701532455_17 at /127.0.0.1:47492 [Receiving block BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021]] datanode.DataXceiver(323): 127.0.0.1:36631:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:47492 dst: /127.0.0.1:36631 java.io.IOException: Premature EOF from inputStream at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:53,790 WARN [ResponseProcessor for block BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021] hdfs.DataStreamer$ResponseProcessor(1190): Exception for BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 java.io.EOFException: Unexpected EOF while trying to read response from server at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:456) at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1080) 2023-05-24 21:55:53,793 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 block BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021] hdfs.DataStreamer(1548): Error Recovery for BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 in pipeline [DatanodeInfoWithStorage[127.0.0.1:37033,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK], DatanodeInfoWithStorage[127.0.0.1:36631,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]]: datanode 0(DatanodeInfoWithStorage[127.0.0.1:37033,DS-7f0b402b-3ecf-4dbd-a575-b94379ccf350,DISK]) is bad. 2023-05-24 21:55:53,803 WARN [DataStreamer for file /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 block BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021] hdfs.DataStreamer(823): DataStreamer Exception org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,821 INFO [Listener at localhost.localdomain/33255] util.RecoverLeaseFSUtils(175): Recovered 
lease, attempt=1 on file=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 after 4002ms 2023-05-24 21:55:56,821 DEBUG [Listener at localhost.localdomain/33255] wal.TestLogRolling(516): Reading WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 2023-05-24 21:55:56,828 DEBUG [Listener at localhost.localdomain/33255] wal.ProtobufLogReader(420): EOF at position 83 2023-05-24 21:55:56,829 INFO [Listener at localhost.localdomain/33255] regionserver.HRegion(2745): Flushing 66ba05d27f73fad98f00ea988b0205bf 1/1 column families, dataSize=4.20 KB heapSize=4.75 KB 2023-05-24 21:55:56,831 WARN [RS:0;jenkins-hbase20:43003.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=11, requesting roll of WAL org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,832 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43003%2C1684965302421:(num 1684965344772) roll requested 2023-05-24 21:55:56,832 DEBUG [Listener at localhost.localdomain/33255] regionserver.HRegion(2446): Flush status journal for 66ba05d27f73fad98f00ea988b0205bf: 2023-05-24 21:55:56,832 INFO [Listener at localhost.localdomain/33255] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at 
com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,833 INFO [Listener at localhost.localdomain/33255] regionserver.HRegion(2745): Flushing d44ae157b28d1a92c545df81ddcfec34 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:55:56,834 DEBUG [Listener at localhost.localdomain/33255] regionserver.HRegion(2446): Flush status journal for d44ae157b28d1a92c545df81ddcfec34: 2023-05-24 21:55:56,835 INFO [Listener at localhost.localdomain/33255] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at 
sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,836 INFO [Listener at localhost.localdomain/33255] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.96 KB heapSize=5.48 KB 2023-05-24 21:55:56,837 WARN [RS_OPEN_META-regionserver/jenkins-hbase20:0-0.append-pool-0] wal.FSHLog$RingBufferEventHandler(1203): Append sequenceId=15, requesting roll of WAL java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,837 DEBUG [Listener at localhost.localdomain/33255] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 21:55:56,837 INFO [Listener at localhost.localdomain/33255] wal.TestLogRolling(551): org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=15, requesting roll of WAL at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1205) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1078) at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:979) at com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:168) at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125) at java.lang.Thread.run(Thread.java:750) Caused by: java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,840 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:55:56,840 INFO [Listener at localhost.localdomain/33255] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 21:55:56,844 DEBUG [Listener at localhost.localdomain/33255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5ede761c to 127.0.0.1:51259 2023-05-24 21:55:56,844 DEBUG [Listener at localhost.localdomain/33255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:56,844 DEBUG [Listener at localhost.localdomain/33255] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:55:56,844 DEBUG [Listener at localhost.localdomain/33255] util.JVMClusterUtil(257): Found active master hash=1621857885, stopped=false 2023-05-24 21:55:56,844 INFO [Listener at localhost.localdomain/33255] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:56,846 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:56,846 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:56,846 INFO [Listener at localhost.localdomain/33255] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:55:56,846 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:56,846 DEBUG [Listener at localhost.localdomain/33255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3af1e826 to 127.0.0.1:51259 2023-05-24 21:55:56,847 DEBUG [Listener at localhost.localdomain/33255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:56,847 INFO [Listener at localhost.localdomain/33255] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,43003,1684965302421' ***** 2023-05-24 21:55:56,847 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:56,847 INFO [Listener at localhost.localdomain/33255] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:55:56,847 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.TestLogRolling$7(456): preLogRoll: oldFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 
newFile=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965356832 2023-05-24 21:55:56,847 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:56,848 INFO [RS:0;jenkins-hbase20:43003] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:55:56,848 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(374): Failed sync-before-close but no outstanding appends; closing WALorg.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=11, requesting roll of WAL 2023-05-24 21:55:56,848 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:55:56,848 INFO [RS:0;jenkins-hbase20:43003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:55:56,848 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965356832 2023-05-24 21:55:56,848 INFO [RS:0;jenkins-hbase20:43003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:55:56,848 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,848 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:56,849 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(462): Close of WAL hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772 failed. 
Cause="Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) ", errors=3, hasUnflushedEntries=false 2023-05-24 21:55:56,849 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.FSHLog(426): Failed close of WAL writer hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772, unflushedEntries=0 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,849 ERROR [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(221): Roll wal failed and waiting timeout, will not retry org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,849 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:56,850 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:56,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 66ba05d27f73fad98f00ea988b0205bf, disabling compactions & flushes 2023-05-24 21:55:56,850 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:56,850 DEBUG [RS:0;jenkins-hbase20:43003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x187d6cd4 to 127.0.0.1:51259 2023-05-24 21:55:56,850 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... 
at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:56,851 DEBUG [RS:0;jenkins-hbase20:43003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:56,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:56,851 WARN [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(165): Failed to shutdown wal java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:41905,DS-7a01207b-a12f-4de2-af98-634f7571de3c,DISK]] are bad. Aborting... at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1537) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. after waiting 0 ms 2023-05-24 21:55:56,851 INFO [RS:0;jenkins-hbase20:43003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:55:56,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:56,851 INFO [RS:0;jenkins-hbase20:43003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:55:56,851 INFO [RS:0;jenkins-hbase20:43003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:55:56,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 66ba05d27f73fad98f00ea988b0205bf 1/1 column families, dataSize=4.20 KB heapSize=4.98 KB 2023-05-24 21:55:56,851 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:55:56,851 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:56,853 WARN [WAL-Shutdown-0] wal.AbstractProtobufLogWriter(237): Failed to write trailer, non-fatal, continuing... 
java.nio.channels.ClosedChannelException at org.apache.hadoop.hdfs.DataStreamer$LastExceptionInStreamer.throwException4Close(DataStreamer.java:324) at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:151) at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:105) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.writeWALTrailerAndMagic(ProtobufLogWriter.java:140) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.writeWALTrailer(AbstractProtobufLogWriter.java:234) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.close(ProtobufLogWriter.java:67) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doShutdown(FSHLog.java:492) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:951) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$2.call(AbstractFSWAL.java:946) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 2023-05-24 21:55:56,853 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 21:55:56,853 WARN [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2760): Received unexpected exception trying to write ABORT_FLUSH marker to WAL: java.io.IOException: Cannot append; log is closed, regionName = TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1166) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.append(FSHLog.java:513) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendMarker(AbstractFSWAL.java:1228) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.doFullMarkerAppendTransaction(WALUtil.java:161) at org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeFlushMarker(WALUtil.java:89) at org.apache.hadoop.hbase.regionserver.HRegion.doAbortFlushToWAL(HRegion.java:2758) at org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(HRegion.java:2711) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2578) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2552) at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2543) at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1733) at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1554) at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:105) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:102) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) in region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 
2023-05-24 21:55:56,853 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1478): Online Regions={66ba05d27f73fad98f00ea988b0205bf=TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf., d44ae157b28d1a92c545df81ddcfec34=hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34., 1588230740=hbase:meta,,1.1588230740} 2023-05-24 21:55:56,853 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:56,854 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1504): Waiting on 1588230740, 66ba05d27f73fad98f00ea988b0205bf, d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:56,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:56,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 66ba05d27f73fad98f00ea988b0205bf: 2023-05-24 21:55:56,853 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(159): ***** ABORTING region server jenkins-hbase20.apache.org,43003,1684965302421: Failed log close in log roller ***** org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at 
org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,854 ERROR [regionserver/jenkins-hbase20:0.logRoller] helpers.MarkerIgnoringBase(143): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint] 2023-05-24 21:55:56,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:56,854 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for java.lang:type=Memory 2023-05-24 21:55:56,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:56,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d44ae157b28d1a92c545df81ddcfec34, disabling compactions & flushes 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:56,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. after waiting 0 ms 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d44ae157b28d1a92c545df81ddcfec34: 2023-05-24 21:55:56,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 
2023-05-24 21:55:56,855 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=IPC 2023-05-24 21:55:56,856 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Replication 2023-05-24 21:55:56,856 DEBUG [regionserver/jenkins-hbase20:0.logRoller] util.JSONBean(130): Listing beans for Hadoop:service=HBase,name=RegionServer,sub=Server 2023-05-24 21:55:56,856 INFO [regionserver/jenkins-hbase20:0.logRoller] regionserver.HRegionServer(2555): Dump of metrics as JSON on abort: { "beans": [ { "name": "java.lang:type=Memory", "modelerType": "sun.management.MemoryImpl", "Verbose": false, "ObjectPendingFinalizationCount": 0, "HeapMemoryUsage": { "committed": 1103626240, "init": 524288000, "max": 2051014656, "used": 314633664 }, "NonHeapMemoryUsage": { "committed": 139026432, "init": 2555904, "max": -1, "used": 136509432 }, "ObjectName": "java.lang:type=Memory" } ], "beans": [], "beans": [], "beans": [] } 2023-05-24 21:55:56,856 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44937] master.MasterRpcServices(609): jenkins-hbase20.apache.org,43003,1684965302421 reported a fatal error: ***** ABORTING region server jenkins-hbase20.apache.org,43003,1684965302421: Failed log close in log roller ***** Cause: org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/WALs/jenkins-hbase20.apache.org,43003,1684965302421/jenkins-hbase20.apache.org%2C43003%2C1684965302421.1684965344772, unflushedEntries=0 at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:427) at org.apache.hadoop.hbase.regionserver.wal.FSHLog.doReplaceWriter(FSHLog.java:70) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.replaceWriter(AbstractFSWAL.java:828) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:884) at org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:304) at org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:211) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unexpected BlockUCState: BP-1287258583-148.251.75.209-1684965301740:blk_1073741841_1021 is UNDER_RECOVERY but not UNDER_CONSTRUCTION at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:4886) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:4955) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:954) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:992) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893) at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540) at org.apache.hadoop.ipc.Client.call(Client.java:1486) at org.apache.hadoop.ipc.Client.call(Client.java:1385) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) at com.sun.proxy.$Proxy30.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:918) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy33.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361) at com.sun.proxy.$Proxy34.updateBlockForPipeline(Unknown Source) at org.apache.hadoop.hdfs.DataStreamer.updateBlockForPipeline(DataStreamer.java:1602) at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1479) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663) 2023-05-24 21:55:56,857 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(197): WAL FSHLog jenkins-hbase20.apache.org%2C43003%2C1684965302421.meta:.meta(num 1684965303046) roll requested 2023-05-24 21:55:56,857 DEBUG [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractFSWAL(874): WAL closed. Skipping rolling of writer 2023-05-24 21:55:57,054 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for 66ba05d27f73fad98f00ea988b0205bf 2023-05-24 21:55:57,054 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:57,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 66ba05d27f73fad98f00ea988b0205bf, disabling compactions & flushes 2023-05-24 21:55:57,054 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:55:57,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:57,055 DEBUG [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1504): Waiting on 1588230740, 66ba05d27f73fad98f00ea988b0205bf, d44ae157b28d1a92c545df81ddcfec34 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. after waiting 0 ms 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 66ba05d27f73fad98f00ea988b0205bf: 2023-05-24 21:55:57,055 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:57,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. 
Ignoring the current request with reason: Unrecoverable exception while closing TestLogRolling-testLogRollOnPipelineRestart,,1684965303769.66ba05d27f73fad98f00ea988b0205bf. 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d44ae157b28d1a92c545df81ddcfec34, disabling compactions & flushes 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:meta,,1.1588230740 2023-05-24 21:55:57,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. after waiting 0 ms 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:57,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d44ae157b28d1a92c545df81ddcfec34: 2023-05-24 21:55:57,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2539): Abort already in progress. Ignoring the current request with reason: Unrecoverable exception while closing hbase:namespace,,1684965303107.d44ae157b28d1a92c545df81ddcfec34. 2023-05-24 21:55:57,255 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1499): We were exiting though online regions are not empty, because some regions failed closing 2023-05-24 21:55:57,255 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43003,1684965302421; all regions closed. 2023-05-24 21:55:57,255 DEBUG [RS:0;jenkins-hbase20:43003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:57,255 INFO [RS:0;jenkins-hbase20:43003] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:57,255 INFO [RS:0;jenkins-hbase20:43003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-05-24 21:55:57,255 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 21:55:57,256 INFO [RS:0;jenkins-hbase20:43003] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43003 2023-05-24 21:55:57,259 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43003,1684965302421 2023-05-24 21:55:57,259 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:57,259 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:57,260 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43003,1684965302421] 2023-05-24 21:55:57,260 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43003,1684965302421; numProcessing=1 2023-05-24 21:55:57,261 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43003,1684965302421 already deleted, retry=false 2023-05-24 21:55:57,261 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43003,1684965302421 expired; onlineServers=0 2023-05-24 21:55:57,261 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44937,1684965302291' ***** 2023-05-24 21:55:57,261 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:55:57,262 DEBUG [M:0;jenkins-hbase20:44937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64940e90, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:55:57,262 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:57,262 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44937,1684965302291; all regions closed. 2023-05-24 21:55:57,262 DEBUG [M:0;jenkins-hbase20:44937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:55:57,262 DEBUG [M:0;jenkins-hbase20:44937] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:55:57,262 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-05-24 21:55:57,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965302636] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965302636,5,FailOnTimeoutGroup] 2023-05-24 21:55:57,262 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965302632] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965302632,5,FailOnTimeoutGroup] 2023-05-24 21:55:57,262 DEBUG [M:0;jenkins-hbase20:44937] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:55:57,264 INFO [M:0;jenkins-hbase20:44937] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:55:57,264 INFO [M:0;jenkins-hbase20:44937] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 21:55:57,264 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:57,264 INFO [M:0;jenkins-hbase20:44937] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:55:57,264 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:57,264 DEBUG [M:0;jenkins-hbase20:44937] master.HMaster(1512): Stopping service threads 2023-05-24 21:55:57,264 INFO [M:0;jenkins-hbase20:44937] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:55:57,265 ERROR [M:0;jenkins-hbase20:44937] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 21:55:57,265 INFO [M:0;jenkins-hbase20:44937] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:55:57,265 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 21:55:57,265 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:57,265 DEBUG [M:0;jenkins-hbase20:44937] zookeeper.ZKUtil(398): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:55:57,265 WARN [M:0;jenkins-hbase20:44937] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:55:57,265 INFO [M:0;jenkins-hbase20:44937] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:55:57,266 INFO [M:0;jenkins-hbase20:44937] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:55:57,267 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:55:57,267 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:57,267 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:57,267 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:55:57,267 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 21:55:57,267 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.18 KB heapSize=45.83 KB 2023-05-24 21:55:57,283 INFO [M:0;jenkins-hbase20:44937] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.18 KB at sequenceid=92 (bloomFilter=true), to=hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b90ff1bcfc354c47a6b1e53bdca91bc9 2023-05-24 21:55:57,288 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/b90ff1bcfc354c47a6b1e53bdca91bc9 as hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b90ff1bcfc354c47a6b1e53bdca91bc9 2023-05-24 21:55:57,293 INFO [M:0;jenkins-hbase20:44937] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36975/user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/b90ff1bcfc354c47a6b1e53bdca91bc9, entries=11, sequenceid=92, filesize=7.0 K 2023-05-24 21:55:57,294 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegion(2948): Finished flush of dataSize ~38.18 KB/39101, heapSize ~45.81 KB/46912, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=92, compaction requested=false 2023-05-24 21:55:57,296 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:57,296 DEBUG [M:0;jenkins-hbase20:44937] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:57,296 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/bcedebd2-ce1d-b3e2-4144-70371d94bcda/MasterData/WALs/jenkins-hbase20.apache.org,44937,1684965302291 2023-05-24 21:55:57,299 INFO [M:0;jenkins-hbase20:44937] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:55:57,299 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:55:57,300 INFO [M:0;jenkins-hbase20:44937] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44937 2023-05-24 21:55:57,302 DEBUG [M:0;jenkins-hbase20:44937] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44937,1684965302291 already deleted, retry=false 2023-05-24 21:55:57,361 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:57,361 INFO [RS:0;jenkins-hbase20:43003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43003,1684965302421; zookeeper connection closed. 
2023-05-24 21:55:57,361 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): regionserver:43003-0x1017f7838230001, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:57,362 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7dd76c67] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7dd76c67 2023-05-24 21:55:57,370 INFO [Listener at localhost.localdomain/33255] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 21:55:57,461 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:57,461 DEBUG [Listener at localhost.localdomain/38795-EventThread] zookeeper.ZKWatcher(600): master:44937-0x1017f7838230000, quorum=127.0.0.1:51259, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:55:57,461 INFO [M:0;jenkins-hbase20:44937] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44937,1684965302291; zookeeper connection closed. 2023-05-24 21:55:57,462 WARN [Listener at localhost.localdomain/33255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:57,467 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:57,572 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:57,572 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 5c148da9-1ff3-4199-a124-1d8cb974f698) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:57,572 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data3/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:57,573 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data4/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:57,574 WARN [Listener at localhost.localdomain/33255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:55:57,578 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:55:57,685 WARN [BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:55:57,686 WARN 
[BP-1287258583-148.251.75.209-1684965301740 heartbeating to localhost.localdomain/127.0.0.1:36975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1287258583-148.251.75.209-1684965301740 (Datanode Uuid 68c294e1-00a7-4ab0-ab48-5bb1da5d6fe7) service to localhost.localdomain/127.0.0.1:36975 2023-05-24 21:55:57,688 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data1/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:57,689 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/cluster_d6c77ebd-0411-2f4c-562b-0eeebd574ef4/dfs/data/data2/current/BP-1287258583-148.251.75.209-1684965301740] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:55:57,701 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:55:57,819 INFO [Listener at localhost.localdomain/33255] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:55:57,833 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:55:57,841 INFO [Listener at localhost.localdomain/33255] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnPipelineRestart Thread=85 (was 74) Potentially hanging thread: nioEventLoopGroup-28-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:36975 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost.localdomain/33255 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) 
org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:36975 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) 
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (32137942) connection to localhost.localdomain/127.0.0.1:36975 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-4-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-29-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:36975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-26-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-27-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:36975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-28-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=462 (was 457) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=87 (was 123), ProcessCount=168 (was 168), AvailableMemoryMB=9553 (was 10204) 2023-05-24 21:55:57,848 INFO [Listener at localhost.localdomain/33255] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=85, OpenFileDescriptor=462, MaxFileDescriptor=60000, SystemLoadAverage=87, ProcessCount=168, AvailableMemoryMB=9552 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/hadoop.log.dir so I do NOT create it in target/test-data/191629fa-ad39-65c7-235d-0becdff395a0 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/bebcbf40-288c-d46f-95a8-33b66920ed95/hadoop.tmp.dir so I do NOT create it in target/test-data/191629fa-ad39-65c7-235d-0becdff395a0 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb, deleteOnExit=true 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/test.cache.data in system properties and HBase conf 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:55:57,849 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:55:57,850 DEBUG [Listener at localhost.localdomain/33255] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:55:57,850 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/nfs.dump.dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:55:57,851 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:55:57,852 INFO [Listener at localhost.localdomain/33255] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:55:57,853 WARN [Listener at localhost.localdomain/33255] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
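[Editor's illustrative aside, not part of the captured log] The StartMiniClusterOption{numMasters=1, numRegionServers=1, numDataNodes=2, numZkServers=1, createRootDir=false, createWALDir=false} entry above is HBaseTestingUtility bringing up the in-process DFS/ZK/HBase cluster before the test body runs. A minimal sketch of how a branch-2.4-style test would typically construct that option and start/stop the mini cluster is shown below; it is an assumption based on the public HBaseTestingUtility API, not code taken from TestLogRolling, and the class name MiniClusterSketch and the empty test body are placeholders.

    // Hedged sketch: starting a mini cluster matching the options logged above.
    // Assumes the org.apache.hadoop.hbase test utility API available in 2.x.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Mirrors the logged option: 1 master, 1 region server, 2 datanodes,
        // 1 ZooKeeper server, and no pre-created root/WAL directories.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(1)
            .numDataNodes(2)
            .numZkServers(1)
            .createRootDir(false)
            .createWALDir(false)
            .build();

        util.startMiniCluster(option);   // starts DFS, ZK, master and region server
        try {
          // test body would go here (e.g. WAL rolling assertions)
        } finally {
          util.shutdownMiniCluster();    // tears down the cluster and temp test dirs
        }
      }
    }

The conf/Configuration and blockmanagement.DatanodeManager warnings that follow are emitted while this mini DFS comes up and are expected in this environment.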
2023-05-24 21:55:57,854 WARN [Listener at localhost.localdomain/33255] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:55:57,854 WARN [Listener at localhost.localdomain/33255] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:55:57,881 WARN [Listener at localhost.localdomain/33255] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:57,883 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:57,888 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/java.io.tmpdir/Jetty_localhost_localdomain_40121_hdfs____.jycjqq/webapp 2023-05-24 21:55:57,961 INFO [Listener at localhost.localdomain/33255] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:40121 2023-05-24 21:55:57,962 WARN [Listener at localhost.localdomain/33255] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:55:57,963 WARN [Listener at localhost.localdomain/33255] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:55:57,963 WARN [Listener at localhost.localdomain/33255] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:55:57,990 WARN [Listener at localhost.localdomain/43781] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:58,006 WARN [Listener at localhost.localdomain/43781] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:58,008 WARN [Listener at localhost.localdomain/43781] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:58,009 INFO [Listener at localhost.localdomain/43781] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:58,015 INFO [Listener at localhost.localdomain/43781] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/java.io.tmpdir/Jetty_localhost_38077_datanode____.r7wutq/webapp 2023-05-24 21:55:58,087 INFO [Listener at localhost.localdomain/43781] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38077 2023-05-24 21:55:58,092 WARN [Listener at localhost.localdomain/32961] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:58,101 WARN [Listener at localhost.localdomain/32961] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:55:58,103 WARN [Listener at localhost.localdomain/32961] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:55:58,104 INFO [Listener at localhost.localdomain/32961] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:55:58,108 INFO [Listener at localhost.localdomain/32961] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/java.io.tmpdir/Jetty_localhost_41903_datanode____1foxxi/webapp 2023-05-24 21:55:58,166 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d2cb2683b443cb5: Processing first storage report for DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef from datanode 57025d95-4e1f-4e21-a0fc-1ef281259880 2023-05-24 21:55:58,166 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d2cb2683b443cb5: from storage DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef node DatanodeRegistration(127.0.0.1:38433, datanodeUuid=57025d95-4e1f-4e21-a0fc-1ef281259880, infoPort=46503, infoSecurePort=0, ipcPort=32961, storageInfo=lv=-57;cid=testClusterID;nsid=1031173042;c=1684965357856), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:58,166 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3d2cb2683b443cb5: Processing first storage report for DS-37c2f1fe-10ac-484f-831d-51f5607d66bc from datanode 57025d95-4e1f-4e21-a0fc-1ef281259880 2023-05-24 21:55:58,166 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3d2cb2683b443cb5: from storage DS-37c2f1fe-10ac-484f-831d-51f5607d66bc node DatanodeRegistration(127.0.0.1:38433, datanodeUuid=57025d95-4e1f-4e21-a0fc-1ef281259880, infoPort=46503, infoSecurePort=0, ipcPort=32961, storageInfo=lv=-57;cid=testClusterID;nsid=1031173042;c=1684965357856), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:58,197 INFO [Listener at localhost.localdomain/32961] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41903 2023-05-24 21:55:58,203 WARN [Listener at localhost.localdomain/39919] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:55:58,256 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xae68635a54f67937: Processing first storage report for DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302 from datanode cf3187b2-1989-461f-9f0e-c3d3bec52ee6 2023-05-24 21:55:58,256 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae68635a54f67937: from storage DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302 node DatanodeRegistration(127.0.0.1:43231, datanodeUuid=cf3187b2-1989-461f-9f0e-c3d3bec52ee6, infoPort=41947, infoSecurePort=0, ipcPort=39919, storageInfo=lv=-57;cid=testClusterID;nsid=1031173042;c=1684965357856), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-05-24 21:55:58,256 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xae68635a54f67937: Processing first storage report for DS-3abb6d66-d2f9-4c1f-8e9b-8f9049b5696f from datanode cf3187b2-1989-461f-9f0e-c3d3bec52ee6 2023-05-24 21:55:58,256 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xae68635a54f67937: from storage DS-3abb6d66-d2f9-4c1f-8e9b-8f9049b5696f node DatanodeRegistration(127.0.0.1:43231, 
datanodeUuid=cf3187b2-1989-461f-9f0e-c3d3bec52ee6, infoPort=41947, infoSecurePort=0, ipcPort=39919, storageInfo=lv=-57;cid=testClusterID;nsid=1031173042;c=1684965357856), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:55:58,312 DEBUG [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0 2023-05-24 21:55:58,316 INFO [Listener at localhost.localdomain/39919] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/zookeeper_0, clientPort=56655, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:55:58,318 INFO [Listener at localhost.localdomain/39919] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56655 2023-05-24 21:55:58,318 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,321 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,336 INFO [Listener at localhost.localdomain/39919] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177 with version=8 2023-05-24 21:55:58,336 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:55:58,338 INFO [Listener at localhost.localdomain/39919] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:55:58,338 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,338 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,338 INFO [Listener at localhost.localdomain/39919] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:55:58,338 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,339 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:55:58,339 INFO [Listener at localhost.localdomain/39919] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:55:58,340 INFO [Listener at localhost.localdomain/39919] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44215 2023-05-24 21:55:58,341 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,342 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,343 INFO [Listener at localhost.localdomain/39919] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44215 connecting to ZooKeeper ensemble=127.0.0.1:56655 2023-05-24 21:55:58,348 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:442150x0, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:55:58,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44215-0x1017f7913170000 connected 2023-05-24 21:55:58,361 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:58,362 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:58,362 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:55:58,365 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44215 2023-05-24 21:55:58,365 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44215 2023-05-24 21:55:58,365 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44215 2023-05-24 21:55:58,365 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44215 2023-05-24 21:55:58,365 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44215 2023-05-24 21:55:58,365 INFO [Listener at localhost.localdomain/39919] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177, hbase.cluster.distributed=false 2023-05-24 21:55:58,375 INFO [Listener at localhost.localdomain/39919] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:55:58,376 INFO [Listener at localhost.localdomain/39919] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:55:58,377 INFO [Listener at localhost.localdomain/39919] ipc.NettyRpcServer(120): Bind to /148.251.75.209:34189 2023-05-24 21:55:58,377 INFO [Listener at localhost.localdomain/39919] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:55:58,378 DEBUG [Listener at localhost.localdomain/39919] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:55:58,378 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,379 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,380 INFO [Listener at localhost.localdomain/39919] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34189 connecting to ZooKeeper ensemble=127.0.0.1:56655 2023-05-24 21:55:58,391 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:341890x0, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:55:58,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34189-0x1017f7913170001 connected 2023-05-24 21:55:58,392 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:55:58,393 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): 
regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:55:58,394 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:55:58,394 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34189 2023-05-24 21:55:58,394 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34189 2023-05-24 21:55:58,395 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34189 2023-05-24 21:55:58,397 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34189 2023-05-24 21:55:58,397 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34189 2023-05-24 21:55:58,398 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,406 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:55:58,406 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,407 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:58,407 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:55:58,408 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,408 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:55:58,409 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,44215,1684965358337 from backup master directory 2023-05-24 21:55:58,409 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:55:58,410 DEBUG [Listener at 
localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,410 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:55:58,410 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 21:55:58,410 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,428 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/hbase.id with ID: 31157f9f-c229-4da8-b470-9befd1f1ff9a 2023-05-24 21:55:58,441 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:58,443 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,452 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x20026108 to 127.0.0.1:56655 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:58,457 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d085e15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:58,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:58,458 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:55:58,458 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:58,460 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store-tmp 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:55:58,468 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:58,468 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:55:58,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:58,469 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/WALs/jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,472 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44215%2C1684965358337, suffix=, logDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/WALs/jenkins-hbase20.apache.org,44215,1684965358337, archiveDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/oldWALs, maxLogs=10 2023-05-24 21:55:58,479 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/WALs/jenkins-hbase20.apache.org,44215,1684965358337/jenkins-hbase20.apache.org%2C44215%2C1684965358337.1684965358472 2023-05-24 21:55:58,479 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK], DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK]] 2023-05-24 21:55:58,479 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:58,479 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:58,479 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,480 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,482 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,484 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 21:55:58,485 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:55:58,486 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,486 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,487 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,490 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:55:58,492 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:58,492 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=793462, jitterRate=0.008939921855926514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:58,492 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:55:58,493 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:55:58,494 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:55:58,494 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:55:58,494 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 21:55:58,494 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 21:55:58,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 21:55:58,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:55:58,495 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:55:58,496 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:55:58,506 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:55:58,507 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-24 21:55:58,507 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:55:58,507 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:55:58,507 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:55:58,509 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,509 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:55:58,510 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:55:58,510 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:55:58,511 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:58,511 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:55:58,511 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,512 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,44215,1684965358337, sessionid=0x1017f7913170000, setting cluster-up flag (Was=false) 2023-05-24 21:55:58,514 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,516 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:55:58,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,519 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:55:58,522 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:58,523 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.hbase-snapshot/.tmp 2023-05-24 21:55:58,525 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:55:58,526 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,529 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965388529 2023-05-24 21:55:58,529 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:55:58,530 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:55:58,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:55:58,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:55:58,531 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:58,531 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:55:58,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:55:58,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:55:58,532 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965358531,5,FailOnTimeoutGroup] 2023-05-24 21:55:58,532 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965358532,5,FailOnTimeoutGroup] 2023-05-24 21:55:58,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:55:58,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,532 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-24 21:55:58,533 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:58,544 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:58,544 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:58,544 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177 2023-05-24 21:55:58,554 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:58,555 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:55:58,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/info 2023-05-24 21:55:58,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:55:58,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:55:58,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:58,559 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:55:58,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:55:58,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/table 2023-05-24 21:55:58,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:55:58,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,562 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740 2023-05-24 21:55:58,563 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740 2023-05-24 21:55:58,565 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:55:58,566 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:55:58,567 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:58,568 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=818532, jitterRate=0.04081849753856659}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:55:58,568 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:55:58,568 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:55:58,568 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:55:58,569 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:55:58,569 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:55:58,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:55:58,571 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:55:58,572 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:55:58,600 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(951): ClusterId : 31157f9f-c229-4da8-b470-9befd1f1ff9a 2023-05-24 21:55:58,601 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:55:58,604 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:55:58,604 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:55:58,608 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:55:58,610 DEBUG [RS:0;jenkins-hbase20:34189] zookeeper.ReadOnlyZKClient(139): Connect 0x112e6a3e to 127.0.0.1:56655 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:58,616 DEBUG [RS:0;jenkins-hbase20:34189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fe9966b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:58,616 DEBUG [RS:0;jenkins-hbase20:34189] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bd6014e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:55:58,628 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:34189 2023-05-24 21:55:58,628 INFO [RS:0;jenkins-hbase20:34189] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:55:58,629 INFO [RS:0;jenkins-hbase20:34189] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:55:58,629 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1022): About to register with Master. 
2023-05-24 21:55:58,629 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,44215,1684965358337 with isa=jenkins-hbase20.apache.org/148.251.75.209:34189, startcode=1684965358375 2023-05-24 21:55:58,629 DEBUG [RS:0;jenkins-hbase20:34189] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:55:58,632 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42991, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:55:58,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,633 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177 2023-05-24 21:55:58,633 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43781 2023-05-24 21:55:58,633 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:55:58,635 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:55:58,635 DEBUG [RS:0;jenkins-hbase20:34189] zookeeper.ZKUtil(162): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,635 WARN [RS:0;jenkins-hbase20:34189] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:55:58,635 INFO [RS:0;jenkins-hbase20:34189] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:58,635 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,34189,1684965358375] 2023-05-24 21:55:58,635 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,640 DEBUG [RS:0;jenkins-hbase20:34189] zookeeper.ZKUtil(162): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,641 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:55:58,641 INFO [RS:0;jenkins-hbase20:34189] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:55:58,642 INFO [RS:0;jenkins-hbase20:34189] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:55:58,642 INFO [RS:0;jenkins-hbase20:34189] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:55:58,642 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,644 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:55:58,645 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,646 DEBUG [RS:0;jenkins-hbase20:34189] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:55:58,647 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,647 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,647 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,657 INFO [RS:0;jenkins-hbase20:34189] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:55:58,657 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34189,1684965358375-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:55:58,666 INFO [RS:0;jenkins-hbase20:34189] regionserver.Replication(203): jenkins-hbase20.apache.org,34189,1684965358375 started 2023-05-24 21:55:58,666 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,34189,1684965358375, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:34189, sessionid=0x1017f7913170001 2023-05-24 21:55:58,666 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:55:58,666 DEBUG [RS:0;jenkins-hbase20:34189] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,666 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34189,1684965358375' 2023-05-24 21:55:58,666 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34189,1684965358375' 2023-05-24 21:55:58,667 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:55:58,668 DEBUG [RS:0;jenkins-hbase20:34189] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:55:58,668 DEBUG [RS:0;jenkins-hbase20:34189] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:55:58,668 INFO [RS:0;jenkins-hbase20:34189] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:55:58,668 INFO [RS:0;jenkins-hbase20:34189] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 21:55:58,722 DEBUG [jenkins-hbase20:44215] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:55:58,723 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,34189,1684965358375, state=OPENING 2023-05-24 21:55:58,724 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:55:58,725 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:58,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,34189,1684965358375}] 2023-05-24 21:55:58,726 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:55:58,727 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:55:58,771 INFO [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34189%2C1684965358375, suffix=, logDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375, archiveDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs, maxLogs=32 2023-05-24 21:55:58,783 INFO [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965358772 2023-05-24 21:55:58,784 DEBUG [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK], DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK]] 2023-05-24 21:55:58,881 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:58,881 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:55:58,884 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38322, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:55:58,890 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:55:58,891 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:55:58,893 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34189%2C1684965358375.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375, 
archiveDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs, maxLogs=32 2023-05-24 21:55:58,904 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.meta.1684965358893.meta 2023-05-24 21:55:58,904 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK], DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK]] 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:55:58,905 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:55:58,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:55:58,907 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:55:58,908 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/info 2023-05-24 21:55:58,908 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/info 2023-05-24 21:55:58,908 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:55:58,909 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,909 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:55:58,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:58,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:55:58,911 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:55:58,912 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,912 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:55:58,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/table 2023-05-24 21:55:58,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/table 2023-05-24 21:55:58,913 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:55:58,914 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:58,915 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740 2023-05-24 21:55:58,916 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740 2023-05-24 21:55:58,918 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:55:58,920 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:55:58,921 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=817703, jitterRate=0.039763301610946655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:55:58,921 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:55:58,923 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965358881 2023-05-24 21:55:58,928 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:55:58,929 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:55:58,929 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,34189,1684965358375, state=OPEN 2023-05-24 21:55:58,931 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:55:58,931 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:55:58,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:55:58,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase20.apache.org,34189,1684965358375 in 205 msec 2023-05-24 21:55:58,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:55:58,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 364 msec 2023-05-24 21:55:58,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 411 msec 2023-05-24 21:55:58,937 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965358937, completionTime=-1 2023-05-24 21:55:58,937 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:55:58,937 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:55:58,941 DEBUG [hconnection-0x3b05c66b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:55:58,945 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38330, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:55:58,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:55:58,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965418946 2023-05-24 21:55:58,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965478946 2023-05-24 21:55:58,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 9 msec 2023-05-24 21:55:58,952 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44215,1684965358337-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44215,1684965358337-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44215,1684965358337-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:44215, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. 
Creating... 2023-05-24 21:55:58,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:58,954 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:55:58,956 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:55:58,956 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:55:58,957 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:55:58,960 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:58,960 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd empty. 2023-05-24 21:55:58,961 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:58,961 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:55:58,980 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:58,981 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d18cb4cfbaeda921d1fc57381c52fdd, NAME => 'hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2d18cb4cfbaeda921d1fc57381c52fdd, disabling compactions & flushes 2023-05-24 21:55:58,996 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. after waiting 0 ms 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:58,996 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:58,996 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2d18cb4cfbaeda921d1fc57381c52fdd: 2023-05-24 21:55:58,999 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:55:59,000 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965359000"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965359000"}]},"ts":"1684965359000"} 2023-05-24 21:55:59,002 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:55:59,003 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:55:59,004 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965359004"}]},"ts":"1684965359004"} 2023-05-24 21:55:59,005 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:55:59,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2d18cb4cfbaeda921d1fc57381c52fdd, ASSIGN}] 2023-05-24 21:55:59,011 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2d18cb4cfbaeda921d1fc57381c52fdd, ASSIGN 2023-05-24 21:55:59,012 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2d18cb4cfbaeda921d1fc57381c52fdd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,34189,1684965358375; forceNewPlan=false, retain=false 2023-05-24 21:55:59,163 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2d18cb4cfbaeda921d1fc57381c52fdd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:59,163 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965359163"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965359163"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965359163"}]},"ts":"1684965359163"} 2023-05-24 21:55:59,169 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 2d18cb4cfbaeda921d1fc57381c52fdd, server=jenkins-hbase20.apache.org,34189,1684965358375}] 2023-05-24 21:55:59,328 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:59,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d18cb4cfbaeda921d1fc57381c52fdd, NAME => 'hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:59,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:59,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,331 INFO [StoreOpener-2d18cb4cfbaeda921d1fc57381c52fdd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,334 DEBUG [StoreOpener-2d18cb4cfbaeda921d1fc57381c52fdd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/info 2023-05-24 21:55:59,335 DEBUG [StoreOpener-2d18cb4cfbaeda921d1fc57381c52fdd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/info 2023-05-24 21:55:59,335 INFO [StoreOpener-2d18cb4cfbaeda921d1fc57381c52fdd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d18cb4cfbaeda921d1fc57381c52fdd columnFamilyName info 2023-05-24 21:55:59,336 INFO [StoreOpener-2d18cb4cfbaeda921d1fc57381c52fdd-1] regionserver.HStore(310): Store=2d18cb4cfbaeda921d1fc57381c52fdd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:59,338 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:55:59,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:59,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 2d18cb4cfbaeda921d1fc57381c52fdd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=744287, jitterRate=-0.05359111726284027}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:59,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 2d18cb4cfbaeda921d1fc57381c52fdd: 2023-05-24 21:55:59,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd., pid=6, masterSystemTime=1684965359323 2023-05-24 21:55:59,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:55:59,354 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 
2023-05-24 21:55:59,355 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2d18cb4cfbaeda921d1fc57381c52fdd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:59,355 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965359355"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965359355"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965359355"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965359355"}]},"ts":"1684965359355"} 2023-05-24 21:55:59,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:55:59,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 2d18cb4cfbaeda921d1fc57381c52fdd, server=jenkins-hbase20.apache.org,34189,1684965358375 in 187 msec 2023-05-24 21:55:59,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:55:59,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2d18cb4cfbaeda921d1fc57381c52fdd, ASSIGN in 350 msec 2023-05-24 21:55:59,362 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:55:59,362 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965359362"}]},"ts":"1684965359362"} 2023-05-24 21:55:59,364 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:55:59,366 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:55:59,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 413 msec 2023-05-24 21:55:59,457 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:55:59,458 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:59,458 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:59,462 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:55:59,476 DEBUG [Listener at localhost.localdomain/39919-EventThread] 
zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:59,480 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 msec 2023-05-24 21:55:59,484 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:55:59,491 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:55:59,494 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-05-24 21:55:59,497 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:55:59,497 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.088sec 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44215,1684965358337-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:55:59,498 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44215,1684965358337-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 21:55:59,499 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ReadOnlyZKClient(139): Connect 0x09f2267d to 127.0.0.1:56655 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:55:59,499 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:55:59,502 DEBUG [Listener at localhost.localdomain/39919] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ecbc9c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:55:59,503 DEBUG [hconnection-0x552ec299-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:55:59,505 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:55:59,506 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:55:59,507 INFO [Listener at localhost.localdomain/39919] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:55:59,509 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:55:59,510 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:55:59,510 INFO [Listener at localhost.localdomain/39919] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:55:59,512 DEBUG [Listener at localhost.localdomain/39919] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 21:55:59,515 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:49012, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 21:55:59,517 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 21:55:59,517 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 21:55:59,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:55:59,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:55:59,521 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:55:59,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testCompactionRecordDoesntBlockRolling" procId is: 9 2023-05-24 21:55:59,522 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:55:59,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:55:59,524 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,524 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229 empty. 
2023-05-24 21:55:59,525 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,525 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testCompactionRecordDoesntBlockRolling regions 2023-05-24 21:55:59,537 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 21:55:59,538 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4962a5e0f5751ff33c169408e36a3229, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testCompactionRecordDoesntBlockRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/.tmp 2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1604): Closing 4962a5e0f5751ff33c169408e36a3229, disabling compactions & flushes 2023-05-24 21:55:59,545 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. after waiting 0 ms 2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:55:59,545 INFO [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 
2023-05-24 21:55:59,545 DEBUG [RegionOpenAndInit-TestLogRolling-testCompactionRecordDoesntBlockRolling-pool-0] regionserver.HRegion(1558): Region close journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:55:59,547 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:55:59,548 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684965359548"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965359548"}]},"ts":"1684965359548"} 2023-05-24 21:55:59,550 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:55:59,551 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:55:59,551 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965359551"}]},"ts":"1684965359551"} 2023-05-24 21:55:59,552 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLING in hbase:meta 2023-05-24 21:55:59,557 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4962a5e0f5751ff33c169408e36a3229, ASSIGN}] 2023-05-24 21:55:59,559 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4962a5e0f5751ff33c169408e36a3229, ASSIGN 2023-05-24 21:55:59,560 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4962a5e0f5751ff33c169408e36a3229, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,34189,1684965358375; forceNewPlan=false, retain=false 2023-05-24 21:55:59,711 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4962a5e0f5751ff33c169408e36a3229, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:59,711 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684965359711"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965359711"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965359711"}]},"ts":"1684965359711"} 2023-05-24 21:55:59,713 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; 
OpenRegionProcedure 4962a5e0f5751ff33c169408e36a3229, server=jenkins-hbase20.apache.org,34189,1684965358375}] 2023-05-24 21:55:59,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:55:59,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4962a5e0f5751ff33c169408e36a3229, NAME => 'TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:55:59,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testCompactionRecordDoesntBlockRolling 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:55:59,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,876 INFO [StoreOpener-4962a5e0f5751ff33c169408e36a3229-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,878 DEBUG [StoreOpener-4962a5e0f5751ff33c169408e36a3229-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info 2023-05-24 21:55:59,878 DEBUG [StoreOpener-4962a5e0f5751ff33c169408e36a3229-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info 2023-05-24 21:55:59,879 INFO [StoreOpener-4962a5e0f5751ff33c169408e36a3229-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4962a5e0f5751ff33c169408e36a3229 columnFamilyName info 2023-05-24 21:55:59,879 INFO [StoreOpener-4962a5e0f5751ff33c169408e36a3229-1] regionserver.HStore(310): 
Store=4962a5e0f5751ff33c169408e36a3229/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:55:59,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,881 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:55:59,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:55:59,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 4962a5e0f5751ff33c169408e36a3229; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=866116, jitterRate=0.10132394731044769}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:55:59,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:55:59,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229., pid=11, masterSystemTime=1684965359866 2023-05-24 21:55:59,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:55:59,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 
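The region-open entries above report a SteppingSplitPolicy whose inner ConstantSizeRegionSplitPolicy has desiredMaxFileSize=866116, i.e. well under 1 MB instead of the multi-GB default; with the logged jitterRate of ~0.10 that is consistent with a base setting of roughly 768 KB, which suggests the test run lowered hbase.hregion.max.filesize before starting the mini-cluster. The snippet below is a sketch of how that is typically done with HBaseTestingUtility; the exact property value and placement are assumptions, not taken from the test source.

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterWithSmallRegions {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Assumed setup: shrink the split threshold so ConstantSizeRegionSplitPolicy
        // reports a desiredMaxFileSize in the hundreds-of-KB range, as logged above.
        util.getConfiguration().setLong("hbase.hregion.max.filesize", 768L * 1024L);
        util.startMiniCluster();
        try {
          // ... run the test against util.getConnection() ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }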
2023-05-24 21:55:59,892 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4962a5e0f5751ff33c169408e36a3229, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:55:59,892 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.","families":{"info":[{"qualifier":"regioninfo","vlen":87,"tag":[],"timestamp":"1684965359892"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965359892"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965359892"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965359892"}]},"ts":"1684965359892"} 2023-05-24 21:55:59,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 21:55:59,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4962a5e0f5751ff33c169408e36a3229, server=jenkins-hbase20.apache.org,34189,1684965358375 in 181 msec 2023-05-24 21:55:59,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 21:55:59,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling, region=4962a5e0f5751ff33c169408e36a3229, ASSIGN in 341 msec 2023-05-24 21:55:59,902 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:55:59,902 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testCompactionRecordDoesntBlockRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965359902"}]},"ts":"1684965359902"} 2023-05-24 21:55:59,904 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testCompactionRecordDoesntBlockRolling, state=ENABLED in hbase:meta 2023-05-24 21:55:59,907 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:55:59,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testCompactionRecordDoesntBlockRolling in 390 msec 2023-05-24 21:56:04,462 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:56:04,641 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:09,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:56:09,524 INFO [Listener at localhost.localdomain/39919] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestLogRolling-testCompactionRecordDoesntBlockRolling, procId: 9 completed 2023-05-24 21:56:09,529 DEBUG [Listener at 
localhost.localdomain/39919] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:09,530 DEBUG [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:09,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 21:56:09,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(165): Submitting procedure hbase:namespace 2023-05-24 21:56:09,550 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'hbase:namespace' 2023-05-24 21:56:09,550 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:09,551 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'hbase:namespace' starting 'acquire' 2023-05-24 21:56:09,551 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'hbase:namespace', kicking off acquire phase on members. 2023-05-24 21:56:09,551 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,551 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 21:56:09,552 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:09,552 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,552 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:09,552 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:09,552 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,552 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 21:56:09,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: 
/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 21:56:09,553 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 21:56:09,553 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 21:56:09,554 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for hbase:namespace 2023-05-24 21:56:09,556 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:hbase:namespace 2023-05-24 21:56:09,556 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'hbase:namespace' with timeout 60000ms 2023-05-24 21:56:09,556 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:09,557 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'hbase:namespace' starting 'acquire' stage 2023-05-24 21:56:09,561 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 21:56:09,561 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:56:09,562 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 21:56:09,562 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. started... 
2023-05-24 21:56:09,562 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 2d18cb4cfbaeda921d1fc57381c52fdd 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:56:09,577 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/.tmp/info/d672e7e7e07f4b9b8dd8ecddfdb03d99 2023-05-24 21:56:09,586 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/.tmp/info/d672e7e7e07f4b9b8dd8ecddfdb03d99 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/info/d672e7e7e07f4b9b8dd8ecddfdb03d99 2023-05-24 21:56:09,592 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/info/d672e7e7e07f4b9b8dd8ecddfdb03d99, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 21:56:09,592 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 2d18cb4cfbaeda921d1fc57381c52fdd in 30ms, sequenceid=6, compaction requested=false 2023-05-24 21:56:09,593 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 2d18cb4cfbaeda921d1fc57381c52fdd: 2023-05-24 21:56:09,593 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:56:09,593 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 21:56:09,593 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
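The flush of the hbase:namespace region above follows a two-step pattern: DefaultStoreFlusher(82) writes the new HFile under the region's .tmp directory, then HRegionFileSystem(485) commits it by moving it into the column-family directory, so readers only ever see complete files. The sketch below illustrates the same write-then-rename commit pattern with the plain Hadoop FileSystem API; the REGION/HFILE placeholders are hypothetical and this is not the HBase internal code verbatim.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteThenCommit {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical paths: write the new file somewhere private first...
        Path tmp = new Path("/hbase/data/hbase/namespace/REGION/.tmp/info/HFILE");
        Path dst = new Path("/hbase/data/hbase/namespace/REGION/info/HFILE");
        // ...then publish it with a single rename, the commit step that
        // HRegionFileSystem(485) logs above as "Committing ... as ...".
        if (!fs.rename(tmp, dst)) {
          throw new java.io.IOException("commit failed: " + tmp);
        }
      }
    }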
2023-05-24 21:56:09,593 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,593 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'hbase:namespace' locally acquired 2023-05-24 21:56:09,593 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure (hbase:namespace) in zk 2023-05-24 21:56:09,595 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,595 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:09,595 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:09,595 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,595 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'hbase:namespace' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 21:56:09,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:09,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:09,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,596 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,597 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:09,597 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure 'hbase:namespace' on coordinator 2023-05-24 21:56:09,597 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'hbase:namespace' starting 'in-barrier' execution. 
2023-05-24 21:56:09,597 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6e549d3d[Count = 0] remaining members to acquire global barrier 2023-05-24 21:56:09,597 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,598 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,598 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,598 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,599 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'hbase:namespace' received 'reached' from coordinator. 2023-05-24 21:56:09,599 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,599 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'hbase:namespace' locally completed 2023-05-24 21:56:09,599 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 21:56:09,599 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'hbase:namespace' completed for member 'jenkins-hbase20.apache.org,34189,1684965358375' in zk 2023-05-24 21:56:09,600 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,600 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'hbase:namespace' has notified controller of completion 2023-05-24 21:56:09,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:09,600 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
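The ZooKeeper traffic above is the two-phase barrier used by flush-table-proc: the coordinator creates a node under /hbase/flush-table-proc/acquired/<proc>, each member does its local flush and adds its server node beneath it, the coordinator then creates /hbase/flush-table-proc/reached/<proc> as the global barrier, and members confirm completion under that node while everyone watches /hbase/flush-table-proc/abort for errors. The repeated "Set watcher on znode that does not yet exist" lines are the watch-before-create trick sketched below with the raw ZooKeeper client (HBase actually goes through its ZKWatcher/ZKUtil wrappers; the session timeout here is an arbitrary assumption).

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class WatchReachedNode {
      public static void main(String[] args) throws Exception {
        // Quorum address taken from the log above; timeout is illustrative.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:56655", 30000, event -> { });
        String reached = "/hbase/flush-table-proc/reached/hbase:namespace";
        // exists() registers the watch even when the node is absent, so the member
        // is notified the moment the coordinator creates the "reached" barrier node.
        zk.exists(reached, (WatchedEvent event) -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            System.out.println("global barrier reached: " + event.getPath());
          }
        });
        // ... do the local flush work, then create this member's node under
        // /hbase/flush-table-proc/reached/<proc>/<server> to signal completion ...
      }
    }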
2023-05-24 21:56:09,600 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:09,600 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'hbase:namespace' completed. 2023-05-24 21:56:09,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:09,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:09,601 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:09,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,602 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,603 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'hbase:namespace' member 'jenkins-hbase20.apache.org,34189,1684965358375': 2023-05-24 21:56:09,603 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'hbase:namespace' execution completed 2023-05-24 21:56:09,603 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-24 21:56:09,603 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' released barrier for procedure'hbase:namespace', counting down latch. 
Waiting for 0 more 2023-05-24 21:56:09,603 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 21:56:09,603 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:hbase:namespace 2023-05-24 21:56:09,603 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure hbase:namespaceincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 21:56:09,605 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,605 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:09,605 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,605 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:09,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:09,605 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:09,605 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:09,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:09,606 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,606 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:09,606 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,607 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:09,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----hbase:namespace 2023-05-24 21:56:09,607 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:09,614 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/hbase:namespace 2023-05-24 21:56:09,614 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 21:56:09,614 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:09,614 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:09,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'hbase:namespace' 2023-05-24 21:56:09,615 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 21:56:09,615 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/hbase:namespace 2023-05-24 21:56:09,615 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/hbase:namespace 2023-05-24 21:56:09,617 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:09,617 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : hbase:namespace'' to complete. (max 20000 ms per retry) 2023-05-24 21:56:09,617 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:09,617 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 21:56:19,617 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2704): Getting current status of procedure from master... 
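The HBaseAdmin(2690/2698/2704) lines above show the client side of this exchange: it submits the flush-table-proc request and then polls the master (up to 300000 ms, sleeping 10000 ms between retries) until MasterRpcServices reports the procedure done. One way to issue such a request from application code is the generic Admin.execProcedure call sketched below; whether the test uses this exact call or a higher-level flush helper is not visible in the log, so treat it as an illustration.

    import java.util.HashMap;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableProc {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Synchronous exec-procedure call; the client blocks, polling the master
          // until the flush-table-proc instance for this table completes.
          admin.execProcedure("flush-table-proc", "hbase:namespace", new HashMap<>());
        }
      }
    }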
2023-05-24 21:56:19,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 21:56:19,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 21:56:19,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,644 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:19,644 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:19,644 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 21:56:19,645 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-24 21:56:19,645 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,645 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,646 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,646 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:19,646 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:19,646 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:19,647 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,647 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 21:56:19,647 
DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,648 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 21:56:19,648 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,648 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,648 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,650 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 21:56:19,650 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:19,651 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 21:56:19,651 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 21:56:19,651 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 21:56:19,651 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:19,651 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. started... 
2023-05-24 21:56:19,652 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4962a5e0f5751ff33c169408e36a3229 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:56:19,672 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=5 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/af0347b1a8d24416b6795ac4db75b2f6 2023-05-24 21:56:19,680 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/af0347b1a8d24416b6795ac4db75b2f6 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6 2023-05-24 21:56:19,690 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6, entries=1, sequenceid=5, filesize=5.8 K 2023-05-24 21:56:19,691 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4962a5e0f5751ff33c169408e36a3229 in 39ms, sequenceid=5, compaction requested=false 2023-05-24 21:56:19,692 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:56:19,692 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:19,692 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 21:56:19,692 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
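This second flush writes one new store file (af0347b1..., entries=1, sequenceid=5, 5.8 K) into the test table's info/ directory, mirroring the namespace flush earlier. Each flush adds exactly one file to the column-family directory until a compaction rewrites them, which can be checked by listing that directory on HDFS as sketched below; the NameNode URI and paths are copied from the HStore(1080) line above, but treating them as stable is an assumption since test data directories are deleted on exit.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CountStoreFiles {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost.localdomain:43781"), new Configuration());
        Path family = new Path("/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/"
            + "data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/"
            + "4962a5e0f5751ff33c169408e36a3229/info");
        FileStatus[] files = fs.listStatus(family);
        // One flush so far => one store file; later flushes in the test would add more
        // until a compaction merges them.
        System.out.println(files.length + " store file(s) under " + family);
      }
    }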
2023-05-24 21:56:19,692 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,692 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 21:56:19,692 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 21:56:19,693 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,693 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:19,694 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:19,694 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:19,694 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,695 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,695 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:19,695 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 21:56:19,695 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6a6b042f[Count = 0] 
remaining members to acquire global barrier 2023-05-24 21:56:19,695 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 21:56:19,695 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,696 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,696 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,696 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,696 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-24 21:56:19,696 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 21:56:19,696 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,34189,1684965358375' in zk 2023-05-24 21:56:19,696 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,696 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 21:56:19,698 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,698 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 21:56:19,698 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,698 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:19,698 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 21:56:19,698 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:19,698 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-24 21:56:19,699 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:19,699 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:19,700 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,700 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,700 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:19,700 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,701 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,701 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,34189,1684965358375': 2023-05-24 21:56:19,701 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 21:56:19,701 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 21:56:19,701 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-24 21:56:19,701 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 21:56:19,701 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,701 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 21:56:19,703 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,703 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,703 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:19,703 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:19,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:19,703 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:19,704 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:19,704 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:19,704 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:19,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,716 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:19,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,718 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,719 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,719 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 21:56:19,719 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:19,719 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 21:56:19,719 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:19,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 21:56:19,719 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:19,719 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:19,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:19,719 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:19,720 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,720 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,720 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 21:56:19,720 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-24 21:56:19,720 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:19,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:19,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:29,720 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-24 21:56:29,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 21:56:29,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 21:56:29,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 2023-05-24 21:56:29,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,738 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:29,738 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:29,738 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 21:56:29,738 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 
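
The "Waiting a max of 300000 ms for procedure 'flush-table-proc ...'", "Getting current status of procedure from master..." and "Sleeping: 10000ms" entries are the client side of this exchange: on this branch a table flush issued through the Admin API is serviced by the flush-table-proc external procedure, and HBaseAdmin then polls the master until the procedure reports done. A minimal client-side sketch that would drive the same sequence against this mini-cluster follows; the class name is made up, the quorum and client port values are taken from the log, and everything else uses the public Admin API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");               // mini-cluster from the log
            conf.set("hbase.zookeeper.property.clientPort", "56655");      // port seen in the log
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Blocks until the coordinated flush completes; while it waits it emits the
                // "Getting current status of procedure from master..." / "Sleeping: 10000ms" lines.
                admin.flush(TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling"));
            }
        }
    }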
2023-05-24 21:56:29,739 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,739 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,740 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:29,740 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,740 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:29,741 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:29,741 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,741 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 21:56:29,741 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,742 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 21:56:29,742 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,742 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,742 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 21:56:29,742 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,742 DEBUG [member: 
'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 21:56:29,742 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:29,743 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 21:56:29,743 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 21:56:29,743 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 21:56:29,743 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:29,743 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. started... 2023-05-24 21:56:29,743 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4962a5e0f5751ff33c169408e36a3229 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:56:29,758 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/2554a4d494d24723ad97db325d3cb3a9 2023-05-24 21:56:29,767 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/2554a4d494d24723ad97db325d3cb3a9 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9 2023-05-24 21:56:29,776 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9, entries=1, sequenceid=9, filesize=5.8 K 2023-05-24 21:56:29,777 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 
4962a5e0f5751ff33c169408e36a3229 in 34ms, sequenceid=9, compaction requested=false 2023-05-24 21:56:29,777 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:56:29,777 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:29,777 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 21:56:29,777 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 2023-05-24 21:56:29,777 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,777 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 21:56:29,777 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 21:56:29,779 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,779 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:29,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:29,779 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,779 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator 2023-05-24 21:56:29,779 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:29,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:29,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:29,781 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 21:56:29,781 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@2f699706[Count = 0] remaining members to acquire global barrier 2023-05-24 21:56:29,781 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 21:56:29,781 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,782 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,782 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,782 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,782 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 
2023-05-24 21:56:29,782 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 21:56:29,782 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,34189,1684965358375' in zk 2023-05-24 21:56:29,782 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,782 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 21:56:29,783 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 21:56:29,783 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,783 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 21:56:29,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:29,784 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:29,784 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 
2023-05-24 21:56:29,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:29,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:29,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,787 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:29,788 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,788 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,789 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,34189,1684965358375': 2023-05-24 21:56:29,789 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 21:56:29,789 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 21:56:29,789 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-05-24 21:56:29,789 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 21:56:29,789 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,789 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 21:56:29,797 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,797 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,797 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,797 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): 
regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:29,797 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,797 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,797 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:29,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:29,798 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:29,798 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:29,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:29,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,798 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:29,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,799 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,799 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:29,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,800 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 
21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:29,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,806 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 21:56:29,806 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:29,806 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 21:56:29,806 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:29,806 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:29,807 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 
2023-05-24 21:56:29,807 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,807 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,807 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:29,807 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:29,807 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:39,807 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2704): Getting current status of procedure from master... 2023-05-24 21:56:39,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 21:56:39,829 INFO [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965358772 with entries=13, filesize=6.44 KB; new WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965399814 2023-05-24 21:56:39,830 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK], DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK]] 2023-05-24 21:56:39,830 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965358772 is not closed yet, will try archiving it next time 2023-05-24 21:56:39,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 21:56:39,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
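
The "Rolled WAL ... with entries=13, filesize=6.44 KB; new WAL ..." entry marks the log roll the test forces between flushes. Judging by the [Listener at ...] thread name the test rolls its WAL handle directly, but the same effect can be requested from a client through the public Admin API; the sketch below shows that route only and is not necessarily what the test does. It assumes the Admin handle opened in the flush sketch above; the class and method names are made up.

    import java.io.IOException;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollWalSketch {
        // 'admin' is the handle opened in the flush sketch above.
        static void rollAllWals(Admin admin) throws IOException {
            for (ServerName rs : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                // Each region server closes its current WAL file and opens a new one,
                // producing a "Rolled WAL ... new WAL ..." entry like the one above.
                admin.rollWALWriter(rs);
            }
        }
    }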
2023-05-24 21:56:39,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,838 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:39,838 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:39,838 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 21:56:39,838 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-24 21:56:39,839 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,839 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,840 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,840 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:39,840 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:39,840 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:39,840 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,840 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 21:56:39,840 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,841 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 21:56:39,841 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,841 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,841 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 21:56:39,841 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,841 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 21:56:39,841 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:39,841 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 21:56:39,841 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 21:56:39,842 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 21:56:39,842 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:39,842 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. started... 
2023-05-24 21:56:39,842 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4962a5e0f5751ff33c169408e36a3229 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:56:39,856 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=13 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/8dee33d6675f4025ab03ce7302896ce9 2023-05-24 21:56:39,865 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/8dee33d6675f4025ab03ce7302896ce9 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9 2023-05-24 21:56:39,870 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9, entries=1, sequenceid=13, filesize=5.8 K 2023-05-24 21:56:39,871 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4962a5e0f5751ff33c169408e36a3229 in 29ms, sequenceid=13, compaction requested=true 2023-05-24 21:56:39,871 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:56:39,872 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:39,872 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 21:56:39,872 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
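
Each coordinated flush writes one more store file for region 4962a5e0f5751ff33c169408e36a3229 (sequenceid=9 above, sequenceid=13 here), which is why this second flush in the excerpt finishes with "compaction requested=true" once the store holds enough files. A small client-side sketch for observing that growth follows; it is a way to inspect the effect from outside, not part of the test, and it assumes the Admin handle from the flush sketch above (class and method names are made up).

    import java.io.IOException;
    import org.apache.hadoop.hbase.RegionMetrics;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class StoreFileCountSketch {
        // 'admin' is the handle opened in the flush sketch above.
        static void printStoreFileCounts(Admin admin) throws IOException {
            TableName table =
                TableName.valueOf("TestLogRolling-testCompactionRecordDoesntBlockRolling");
            for (ServerName rs : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                for (RegionMetrics region : admin.getRegionMetrics(rs, table)) {
                    // One store file is added per flush; once the store reaches the default
                    // threshold of three files, a compaction is requested (as in the log above).
                    System.out.println(region.getNameAsString()
                        + " storefiles=" + region.getStoreFileCount());
                }
            }
        }
    }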
2023-05-24 21:56:39,872 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,872 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 21:56:39,872 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 21:56:39,873 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,873 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,874 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,874 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:39,874 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:39,874 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,874 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 21:56:39,874 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:39,874 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:39,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,875 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:39,875 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 21:56:39,875 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@547bfc23[Count = 0] 
remaining members to acquire global barrier 2023-05-24 21:56:39,876 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 21:56:39,876 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,876 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,876 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,876 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,877 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-24 21:56:39,877 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 21:56:39,877 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,877 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 21:56:39,877 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,34189,1684965358375' in zk 2023-05-24 21:56:39,878 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 21:56:39,878 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,878 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 21:56:39,878 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,879 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:39,879 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:39,878 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-24 21:56:39,879 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:39,879 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:39,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:39,880 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,881 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,881 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,34189,1684965358375': 2023-05-24 21:56:39,881 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 21:56:39,881 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 21:56:39,881 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
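On the coordinator side, the log prints the java.util.concurrent.CountDownLatch it is blocked on and counts it down once per member znode ("counting down latch. Waiting for 0 more"). The runnable sketch below shows only that latch mechanic, with a plain thread standing in for the ZooKeeper watcher callback; everything else in HBase's Procedure class is omitted.

```java
import java.util.concurrent.CountDownLatch;

public class CoordinatorBarrierSketch {
    public static void main(String[] args) throws InterruptedException {
        int members = 1;  // this mini cluster runs a single region server
        CountDownLatch acquired = new CountDownLatch(members);

        // In HBase the countDown() happens inside a ZooKeeper watcher when a
        // member znode appears under .../acquired/<procedure>/; a thread
        // stands in for that callback here.
        new Thread(acquired::countDown).start();

        // "Waiting on: java.util.concurrent.CountDownLatch... remaining members
        //  to acquire global barrier"
        acquired.await();
        System.out.println("all members acquired; coordinator would now create the 'reached' znode");
    }
}
```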
2023-05-24 21:56:39,881 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 21:56:39,881 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,881 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 21:56:39,882 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,882 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,882 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,882 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:39,882 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:39,882 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,882 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,882 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:39,883 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,883 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:39,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:39,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:39,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,883 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:39,884 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,884 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,884 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,884 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:39,884 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,885 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,886 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,886 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:39,886 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,886 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:39,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:39,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 
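Once the procedure finishes, the coordinator clears the per-procedure subtrees under acquired/, reached/ and abort/, which is what produces the burst of NodeCreated, NodeDeleted and NodeChildrenChanged events above. A hypothetical recursive-delete helper using the plain ZooKeeper API is sketched below; the real ZKProcedureCoordinator/ZKProcedureUtil code also creates and removes an abort marker and handles races, which are left out here.

```java
import org.apache.zookeeper.ZooKeeper;

public class ProcedureCleanupSketch {
    // Delete a znode and everything under it (children first, version ignored).
    public static void deleteRecursively(ZooKeeper zk, String path) throws Exception {
        for (String child : zk.getChildren(path, false)) {
            deleteRecursively(zk, path + "/" + child);
        }
        zk.delete(path, -1);  // -1 matches any version
    }

    // Clear the per-procedure nodes named in the "Clearing all znodes" line.
    public static void cleanup(ZooKeeper zk, String proc) throws Exception {
        for (String root : new String[] {"acquired", "reached", "abort"}) {
            deleteRecursively(zk, "/hbase/flush-table-proc/" + root + "/" + proc);
        }
    }
}
```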
2023-05-24 21:56:39,886 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:39,887 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-05-24 21:56:39,887 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 21:56:39,888 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(398): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Unable to get data of znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling because node does not exist (not an error) 2023-05-24 21:56:39,887 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,888 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:39,888 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:39,888 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:49,888 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2704): Getting current status of procedure from master... 
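The HBaseAdmin lines above are the client side of the same flush: it asks the master to run the 'flush-table-proc' procedure for the table and then sleeps 10 s between status checks, for up to 300000 ms. The sketch below uses the public Admin methods for that signature/instance pair; the explicit polling loop only makes visible the waiting that execProcedure performs internally, and the class name is illustrative.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Admin;

public class FlushProcedureClientSketch {
    public static void flushViaProcedure(Admin admin, String table) throws Exception {
        // Kick off the global flush procedure for the table.
        admin.execProcedure("flush-table-proc", table, Collections.emptyMap());

        // Poll the master, mirroring "(#1) Sleeping: 10000ms while waiting for
        // procedure completion." and "Checking to see if procedure ... is done".
        while (!admin.isProcedureFinished("flush-table-proc", table, Collections.emptyMap())) {
            Thread.sleep(10_000);
        }
    }
}
```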
2023-05-24 21:56:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 21:56:49,891 DEBUG [Listener at localhost.localdomain/39919] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:56:49,901 DEBUG [Listener at localhost.localdomain/39919] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 17769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:56:49,901 DEBUG [Listener at localhost.localdomain/39919] regionserver.HStore(1912): 4962a5e0f5751ff33c169408e36a3229/info is initiating minor compaction (all files) 2023-05-24 21:56:49,901 INFO [Listener at localhost.localdomain/39919] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:56:49,901 INFO [Listener at localhost.localdomain/39919] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:56:49,901 INFO [Listener at localhost.localdomain/39919] regionserver.HRegion(2259): Starting compaction of 4962a5e0f5751ff33c169408e36a3229/info in TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:49,901 INFO [Listener at localhost.localdomain/39919] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9] into tmpdir=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp, totalSize=17.4 K 2023-05-24 21:56:49,902 DEBUG [Listener at localhost.localdomain/39919] compactions.Compactor(207): Compacting af0347b1a8d24416b6795ac4db75b2f6, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=5, earliestPutTs=1684965379635 2023-05-24 21:56:49,903 DEBUG [Listener at localhost.localdomain/39919] compactions.Compactor(207): Compacting 2554a4d494d24723ad97db325d3cb3a9, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1684965389725 2023-05-24 21:56:49,903 DEBUG [Listener at localhost.localdomain/39919] compactions.Compactor(207): Compacting 8dee33d6675f4025ab03ce7302896ce9, keycount=1, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=13, earliestPutTs=1684965399811 2023-05-24 21:56:49,916 INFO [Listener at localhost.localdomain/39919] throttle.PressureAwareThroughputController(145): 4962a5e0f5751ff33c169408e36a3229#info#compaction#19 average 
throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:56:49,933 DEBUG [Listener at localhost.localdomain/39919] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/6207cd38920d48539e57fa48c4763a55 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/6207cd38920d48539e57fa48c4763a55 2023-05-24 21:56:49,942 INFO [Listener at localhost.localdomain/39919] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4962a5e0f5751ff33c169408e36a3229/info of 4962a5e0f5751ff33c169408e36a3229 into 6207cd38920d48539e57fa48c4763a55(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:56:49,942 DEBUG [Listener at localhost.localdomain/39919] regionserver.HRegion(2289): Compaction status journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:56:49,954 INFO [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965399814 with entries=4, filesize=2.45 KB; new WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965409944 2023-05-24 21:56:49,954 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK], DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK]] 2023-05-24 21:56:49,955 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965399814 is not closed yet, will try archiving it next time 2023-05-24 21:56:49,955 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965358772 to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965358772 2023-05-24 21:56:49,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(933): Client=jenkins//148.251.75.209 procedure request for: flush-table-proc 2023-05-24 21:56:49,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(143): Procedure TestLogRolling-testCompactionRecordDoesntBlockRolling was in running list but was completed. Accepting new attempt. 
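The compaction that just ran was selected by the exploring policy from three ~5.8 K flush files totalling 17769 bytes ("1 in ratio") and rewrote them into a single 8.0 K file. The arithmetic behind "in ratio" can be checked by hand: a file stays in the selection while it is no larger than the combined size of the other candidates multiplied by the compaction ratio. The sizes below come from the log; the 1.2 ratio is HBase's default hbase.hstore.compaction.ratio and is assumed rather than read from this cluster's configuration.

```java
public class CompactionRatioSketch {
    public static void main(String[] args) {
        long[] fileSizes = {5923, 5923, 5923};  // three flush files, 17769 bytes total
        double ratio = 1.2;                     // assumed default hbase.hstore.compaction.ratio

        long total = 0;
        for (long s : fileSizes) total += s;

        for (long s : fileSizes) {
            // In ratio: size <= ratio * (size of everything else in the selection).
            boolean inRatio = s <= ratio * (total - s);
            System.out.println(s + " bytes in ratio: " + inRatio);  // prints true for all three
        }
    }
}
```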
2023-05-24 21:56:49,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] procedure.ProcedureCoordinator(165): Submitting procedure TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,964 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:49,964 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:49,965 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' 2023-05-24 21:56:49,965 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling', kicking off acquire phase on members. 2023-05-24 21:56:49,965 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,965 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,967 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:49,968 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:49,968 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:49,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:49,968 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:49,968 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-05-24 21:56:49,968 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,969 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet 
exist, /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,969 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 4 2023-05-24 21:56:49,969 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,969 DEBUG [zk-event-processor-pool-0] flush.RegionServerFlushTableProcedureManager(153): Launching subprocedure to flush regions for TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,969 WARN [zk-event-processor-pool-0] procedure.ProcedureMember(133): A completed old subproc TestLogRolling-testCompactionRecordDoesntBlockRolling is still present, removing 2023-05-24 21:56:49,969 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:49,970 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' with timeout 60000ms 2023-05-24 21:56:49,970 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 60000 ms 2023-05-24 21:56:49,970 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'acquire' stage 2023-05-24 21:56:49,974 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.FlushTableSubprocedure(113): Flush region tasks submitted for 1 regions 2023-05-24 21:56:49,974 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(242): Waiting for local region flush to finish. 2023-05-24 21:56:49,974 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(69): Starting region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:49,974 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(72): Flush region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. started... 
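Inside the subprocedure, the region server fans the work out per region: "Flush region tasks submitted for 1 regions" followed by "Waiting for local region flush to finish." is a submit-and-join over a small thread pool. A stripped-down sketch of that pattern is below; the pool size and task type are placeholders, and the real FlushTableSubprocedurePool layers cancellation and timeout handling on top.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RegionFlushPoolSketch {
    public static void flushRegions(List<Callable<Void>> regionFlushTasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);  // placeholder pool size
        try {
            // Submit one task per region of the table.
            List<Future<Void>> results = new ArrayList<>();
            for (Callable<Void> task : regionFlushTasks) {
                results.add(pool.submit(task));
            }
            // "Waiting for local region flush to finish." /
            // "Completed 1/1 local region flush tasks."
            for (Future<Void> f : results) {
                f.get();
            }
        } finally {
            pool.shutdown();
        }
    }
}
```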
2023-05-24 21:56:49,974 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2745): Flushing 4962a5e0f5751ff33c169408e36a3229 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:56:49,988 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=18 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/90abb70e17a84374a993058755576b9b 2023-05-24 21:56:49,994 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/90abb70e17a84374a993058755576b9b as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/90abb70e17a84374a993058755576b9b 2023-05-24 21:56:50,000 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/90abb70e17a84374a993058755576b9b, entries=1, sequenceid=18, filesize=5.8 K 2023-05-24 21:56:50,001 INFO [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4962a5e0f5751ff33c169408e36a3229 in 27ms, sequenceid=18, compaction requested=false 2023-05-24 21:56:50,001 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] regionserver.HRegion(2446): Flush status journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:56:50,001 DEBUG [rs(jenkins-hbase20.apache.org,34189,1684965358375)-flush-proc-pool-0] flush.FlushTableSubprocedure$RegionFlushTask(80): Closing region operation on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:56:50,001 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(253): Completed 1/1 local region flush tasks. 2023-05-24 21:56:50,001 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(255): Completed 1 local region flush tasks. 
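The flush itself follows the two-step commit visible above: the new HFile (90abb70e...) is written under the region's .tmp directory and then moved into the info store directory before being recorded at sequenceid=18. A simplified version of that commit step, reduced to a single HDFS rename, is sketched below; the real HRegionFileSystem also validates the HFile, and the method name here is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushCommitSketch {
    // Move a flushed HFile from <region>/.tmp/<family>/<file> into <region>/<family>/<file>,
    // mirroring the "Committing ... .tmp/info/... as ... info/..." lines.
    public static void commitFlushedFile(Configuration conf, Path tmpFile, Path storeDir)
            throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf);
        Path dest = new Path(storeDir, tmpFile.getName());
        if (!fs.rename(tmpFile, dest)) {
            throw new java.io.IOException("Failed to commit " + tmpFile + " to " + dest);
        }
    }
}
```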
2023-05-24 21:56:50,001 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] flush.RegionServerFlushTableProcedureManager$FlushTableSubprocedurePool(287): cancelling 0 flush region tasks jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,001 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally acquired 2023-05-24 21:56:50,001 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure (TestLogRolling-testCompactionRecordDoesntBlockRolling) in zk 2023-05-24 21:56:50,003 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,003 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,003 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,003 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:50,003 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,003 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-05-24 21:56:50,003 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:50,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:50,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:50,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,004 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,005 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:50,005 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase20.apache.org,34189,1684965358375' joining acquired barrier for procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' on coordinator 2023-05-24 21:56:50,005 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@1e8c3ee4[Count = 0] 
remaining members to acquire global barrier 2023-05-24 21:56:50,005 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' starting 'in-barrier' execution. 2023-05-24 21:56:50,005 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,006 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,006 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,006 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,006 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' received 'reached' from coordinator. 2023-05-24 21:56:50,006 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' locally completed 2023-05-24 21:56:50,006 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed for member 'jenkins-hbase20.apache.org,34189,1684965358375' in zk 2023-05-24 21:56:50,006 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,006 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-05-24 21:56:50,007 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' has notified controller of completion 2023-05-24 21:56:50,007 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,007 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 21:56:50,007 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:50,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:50,007 DEBUG [member: 'jenkins-hbase20.apache.org,34189,1684965358375' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' completed. 2023-05-24 21:56:50,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:50,008 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:50,009 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,009 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,009 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:50,009 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,010 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,010 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' member 'jenkins-hbase20.apache.org,34189,1684965358375': 2023-05-24 21:56:50,010 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase20.apache.org,34189,1684965358375' released barrier for procedure'TestLogRolling-testCompactionRecordDoesntBlockRolling', counting down latch. Waiting for 0 more 2023-05-24 21:56:50,010 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'TestLogRolling-testCompactionRecordDoesntBlockRolling' execution completed 2023-05-24 21:56:50,010 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
2023-05-24 21:56:50,010 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-05-24 21:56:50,010 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,010 INFO [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure TestLogRolling-testCompactionRecordDoesntBlockRollingincluding nodes /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort 2023-05-24 21:56:50,011 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,011 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,011 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,011 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/flush-table-proc 2023-05-24 21:56:50,011 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,011 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-05-24 21:56:50,012 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:56:50,012 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): 
|----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,012 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-05-24 21:56:50,013 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,013 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,013 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on existing znode=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,013 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-05-24 21:56:50,014 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,014 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,029 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired 2023-05-24 21:56:50,029 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,029 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/flush-table-proc/acquired 2023-05-24 21:56:50,029 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(180): Done waiting - exec procedure flush-table-proc for 'TestLogRolling-testCompactionRecordDoesntBlockRolling' 2023-05-24 21:56:50,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] flush.MasterFlushTableProcedureManager(182): Master flush table procedure is successful! 2023-05-24 21:56:50,029 DEBUG [(jenkins-hbase20.apache.org,44215,1684965358337)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-05-24 21:56:50,029 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/abort 2023-05-24 21:56:50,029 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/acquired/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,029 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2690): Waiting a max of 300000 ms for procedure 'flush-table-proc : TestLogRolling-testCompactionRecordDoesntBlockRolling'' to complete. (max 20000 ms per retry) 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2698): (#1) Sleeping: 10000ms while waiting for procedure completion. 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/reached/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,030 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/flush-table-proc/abort/TestLogRolling-testCompactionRecordDoesntBlockRolling 2023-05-24 21:56:50,030 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/flush-table-proc/abort 2023-05-24 21:56:50,030 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:57:00,030 DEBUG [Listener at localhost.localdomain/39919] client.HBaseAdmin(2704): Getting current status of procedure from master... 
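Each "Rolled WAL ... with entries=N ...; new WAL ..." pair in this run is the result of a WAL roll: the current writer is closed, a new file named with the current timestamp is opened on a fresh datanode pipeline, and older files whose edits are all flushed are archived to oldWALs. The rolls here run on the test's own thread (the "Listener at localhost.localdomain/39919" tag) via the region server's WAL instance; an equivalent external request through the public Admin API would look like the sketch below.

```java
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

public class WalRollSketch {
    // Ask a specific region server to roll its WAL writer.
    public static void requestRoll(Admin admin, ServerName regionServer) throws Exception {
        admin.rollWALWriter(regionServer);
    }
}
```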
2023-05-24 21:57:00,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44215] master.MasterRpcServices(1186): Checking to see if procedure from request:flush-table-proc is done 2023-05-24 21:57:00,046 INFO [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965409944 with entries=3, filesize=1.97 KB; new WAL /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965420036 2023-05-24 21:57:00,046 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43231,DS-c8c7897a-2b41-46b7-9fa3-6f838b3f1302,DISK], DatanodeInfoWithStorage[127.0.0.1:38433,DS-c6797a4f-a8c4-4460-8a22-c2ed339a5aef,DISK]] 2023-05-24 21:57:00,046 DEBUG [Listener at localhost.localdomain/39919] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965409944 is not closed yet, will try archiving it next time 2023-05-24 21:57:00,046 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:57:00,046 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965399814 to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs/jenkins-hbase20.apache.org%2C34189%2C1684965358375.1684965399814 2023-05-24 21:57:00,046 INFO [Listener at localhost.localdomain/39919] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 21:57:00,046 DEBUG [Listener at localhost.localdomain/39919] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09f2267d to 127.0.0.1:56655 2023-05-24 21:57:00,047 DEBUG [Listener at localhost.localdomain/39919] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:57:00,047 DEBUG [Listener at localhost.localdomain/39919] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:57:00,047 DEBUG [Listener at localhost.localdomain/39919] util.JVMClusterUtil(257): Found active master hash=97222048, stopped=false 2023-05-24 21:57:00,048 INFO [Listener at localhost.localdomain/39919] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:57:00,050 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:57:00,050 INFO [Listener at localhost.localdomain/39919] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:57:00,050 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:57:00,051 DEBUG [Listener at 
localhost.localdomain/39919] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x20026108 to 127.0.0.1:56655 2023-05-24 21:57:00,052 DEBUG [Listener at localhost.localdomain/39919] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:57:00,051 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:00,052 INFO [Listener at localhost.localdomain/39919] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,34189,1684965358375' ***** 2023-05-24 21:57:00,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:57:00,052 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:57:00,052 INFO [Listener at localhost.localdomain/39919] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:57:00,053 INFO [RS:0;jenkins-hbase20:34189] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:57:00,053 INFO [RS:0;jenkins-hbase20:34189] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:57:00,053 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:57:00,053 INFO [RS:0;jenkins-hbase20:34189] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:57:00,054 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(3303): Received CLOSE for 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:57:00,055 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(3303): Received CLOSE for 2d18cb4cfbaeda921d1fc57381c52fdd 2023-05-24 21:57:00,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 4962a5e0f5751ff33c169408e36a3229, disabling compactions & flushes 2023-05-24 21:57:00,055 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:57:00,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:57:00,055 DEBUG [RS:0;jenkins-hbase20:34189] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x112e6a3e to 127.0.0.1:56655 2023-05-24 21:57:00,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:57:00,055 DEBUG [RS:0;jenkins-hbase20:34189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:57:00,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 
after waiting 0 ms 2023-05-24 21:57:00,055 INFO [RS:0;jenkins-hbase20:34189] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:57:00,056 INFO [RS:0;jenkins-hbase20:34189] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:57:00,056 INFO [RS:0;jenkins-hbase20:34189] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:57:00,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:57:00,056 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:57:00,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 4962a5e0f5751ff33c169408e36a3229 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:57:00,056 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-05-24 21:57:00,056 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1478): Online Regions={4962a5e0f5751ff33c169408e36a3229=TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229., 2d18cb4cfbaeda921d1fc57381c52fdd=hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd., 1588230740=hbase:meta,,1.1588230740} 2023-05-24 21:57:00,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:57:00,058 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1504): Waiting on 1588230740, 2d18cb4cfbaeda921d1fc57381c52fdd, 4962a5e0f5751ff33c169408e36a3229 2023-05-24 21:57:00,058 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:57:00,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:57:00,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:57:00,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:57:00,058 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.61 KB 2023-05-24 21:57:00,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=22 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/baf7c142751145de85906ca1f1badbe7 2023-05-24 21:57:00,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/.tmp/info/baf7c142751145de85906ca1f1badbe7 as 
hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/baf7c142751145de85906ca1f1badbe7 2023-05-24 21:57:00,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/baf7c142751145de85906ca1f1badbe7, entries=1, sequenceid=22, filesize=5.8 K 2023-05-24 21:57:00,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 4962a5e0f5751ff33c169408e36a3229 in 32ms, sequenceid=22, compaction requested=true 2023-05-24 21:57:00,091 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9] to archive 2023-05-24 21:57:00,092 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.-1] backup.HFileArchiver(360): Archiving compacted files. 
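The records above show the close path for region 4962a5e0f5751ff33c169408e36a3229: the memstore is flushed to one last HFile, and the three store files left over from earlier compactions are moved under archive/. A minimal sketch of how a test can drive that same state before shutdown, assuming the standard branch-2.4 HBaseTestingUtility/Admin APIs; the class name, table name, family, and row contents below are illustrative, not taken from this run:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FlushAndArchiveSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();
        TableName tn = TableName.valueOf("FlushAndArchiveDemo");   // illustrative name
        byte[] family = Bytes.toBytes("info");
        try (Table table = util.createTable(tn, family);
             Admin admin = util.getAdmin()) {
          // Each put+flush round leaves one more HFile in the 'info' store.
          for (int i = 0; i < 3; i++) {
            table.put(new Put(Bytes.toBytes("row" + i))
                .addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v" + i)));
            admin.flush(tn);
          }
          // Major compaction (asynchronous) rewrites the store; once it completes,
          // the superseded HFiles become the archive candidates that the
          // HFileArchiver lines above record being moved.
          admin.majorCompact(tn);
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }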
2023-05-24 21:57:00,094 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6 to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/af0347b1a8d24416b6795ac4db75b2f6 2023-05-24 21:57:00,095 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9 to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/2554a4d494d24723ad97db325d3cb3a9 2023-05-24 21:57:00,096 DEBUG [StoreCloser-TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9 to hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/archive/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/info/8dee33d6675f4025ab03ce7302896ce9 2023-05-24 21:57:00,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/default/TestLogRolling-testCompactionRecordDoesntBlockRolling/4962a5e0f5751ff33c169408e36a3229/recovered.edits/25.seqid, newMaxSeqId=25, maxSeqId=1 2023-05-24 21:57:00,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 4962a5e0f5751ff33c169408e36a3229: 2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testCompactionRecordDoesntBlockRolling,,1684965359516.4962a5e0f5751ff33c169408e36a3229. 2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 2d18cb4cfbaeda921d1fc57381c52fdd, disabling compactions & flushes 2023-05-24 21:57:00,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 
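Both user regions reach the "Closed ..." state and write their recovered.edits/<seqid> marker before the meta region is touched. When triaging a flaky run, pulling that close journal out of the raw log is often the quickest check; a small self-contained sketch using only JDK regex, with an illustrative log file name:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CloseJournalGrep {
      // Matches records such as:
      //   2023-05-24 21:57:00,105 INFO [...] regionserver.HRegion(1838): Closed <region>
      private static final Pattern CLOSED = Pattern.compile(
          "(\\d{4}-\\d{2}-\\d{2} \\S+) \\S+ \\[[^\\]]+\\] regionserver\\.HRegion\\(\\d+\\): Closed (\\S+)");

      public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get("testlogrolling.log"))) {  // illustrative path
          Matcher m = CLOSED.matcher(line);
          while (m.find()) {
            System.out.println(m.group(1) + "  " + m.group(2));  // timestamp + region name
          }
        }
      }
    }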
2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. after waiting 0 ms 2023-05-24 21:57:00,106 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:57:00,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/namespace/2d18cb4cfbaeda921d1fc57381c52fdd/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 21:57:00,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:57:00,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 2d18cb4cfbaeda921d1fc57381c52fdd: 2023-05-24 21:57:00,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684965358953.2d18cb4cfbaeda921d1fc57381c52fdd. 2023-05-24 21:57:00,258 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-05-24 21:57:00,459 DEBUG [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-05-24 21:57:00,479 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.85 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/.tmp/info/db5b4fe791b548d2a62c4198df78364b 2023-05-24 21:57:00,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=264 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/.tmp/table/4cbdddc0532042438a5055fd4f771149 2023-05-24 21:57:00,508 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/.tmp/info/db5b4fe791b548d2a62c4198df78364b as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/info/db5b4fe791b548d2a62c4198df78364b 2023-05-24 21:57:00,514 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/info/db5b4fe791b548d2a62c4198df78364b, entries=20, sequenceid=14, filesize=7.6 K 2023-05-24 21:57:00,515 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/.tmp/table/4cbdddc0532042438a5055fd4f771149 as 
hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/table/4cbdddc0532042438a5055fd4f771149 2023-05-24 21:57:00,521 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/table/4cbdddc0532042438a5055fd4f771149, entries=4, sequenceid=14, filesize=4.9 K 2023-05-24 21:57:00,522 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3178, heapSize ~5.33 KB/5456, currentSize=0 B/0 for 1588230740 in 464ms, sequenceid=14, compaction requested=false 2023-05-24 21:57:00,529 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-05-24 21:57:00,529 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 21:57:00,530 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:57:00,530 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:57:00,530 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 21:57:00,651 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:57:00,659 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,34189,1684965358375; all regions closed. 
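After the meta flush completes, the server reports "all regions closed". In a test this shutdown is normally driven through the mini-cluster handle rather than by killing the process; a sketch under the assumption that the usual branch-2.4 MiniHBaseCluster methods are available (class name illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class StopRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        // Ask region server 0 to stop; this triggers the same CLOSE handlers,
        // memstore flushes and lease/chore teardown that the records above show.
        cluster.stopRegionServer(0);
        // Block until its threads have exited ("Exiting; stopping=..." in the log).
        cluster.waitOnRegionServer(0);
        util.shutdownMiniCluster();
      }
    }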
2023-05-24 21:57:00,660 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:57:00,673 DEBUG [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs 2023-05-24 21:57:00,673 INFO [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C34189%2C1684965358375.meta:.meta(num 1684965358893) 2023-05-24 21:57:00,674 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/WALs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:57:00,681 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 21:57:00,681 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 21:57:00,684 DEBUG [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/oldWALs 2023-05-24 21:57:00,684 INFO [RS:0;jenkins-hbase20:34189] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C34189%2C1684965358375:(num 1684965420036) 2023-05-24 21:57:00,684 DEBUG [RS:0;jenkins-hbase20:34189] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:57:00,684 INFO [RS:0;jenkins-hbase20:34189] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:57:00,684 INFO [RS:0;jenkins-hbase20:34189] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 21:57:00,684 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
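The two FSHLog instances (the meta WAL and the default WAL) are closed and their remaining files moved to oldWALs. While the cluster is still running, the same roll can be requested explicitly, which is essentially what TestLogRolling exercises; a sketch assuming the Admin.rollWALWriter API, with the server looked up through the mini cluster (class name illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;

    public class RollWalSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();
        try (Admin admin = util.getAdmin()) {
          // The single region server started by the mini cluster.
          ServerName rs = util.getMiniHBaseCluster().getRegionServer(0).getServerName();
          // Ask the server to close its current WAL and open a new one; the old
          // file becomes eligible to move to oldWALs once its edits are flushed.
          admin.rollWALWriter(rs);
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }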
2023-05-24 21:57:00,685 INFO [RS:0;jenkins-hbase20:34189] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:34189 2023-05-24 21:57:00,688 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:57:00,688 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34189,1684965358375 2023-05-24 21:57:00,688 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:57:00,689 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,34189,1684965358375] 2023-05-24 21:57:00,689 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,34189,1684965358375; numProcessing=1 2023-05-24 21:57:00,690 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,34189,1684965358375 already deleted, retry=false 2023-05-24 21:57:00,690 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,34189,1684965358375 expired; onlineServers=0 2023-05-24 21:57:00,690 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,44215,1684965358337' ***** 2023-05-24 21:57:00,690 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:57:00,691 DEBUG [M:0;jenkins-hbase20:44215] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f4f9c1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:57:00,691 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:57:00,691 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44215,1684965358337; all regions closed. 2023-05-24 21:57:00,691 DEBUG [M:0;jenkins-hbase20:44215] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:57:00,691 DEBUG [M:0;jenkins-hbase20:44215] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:57:00,691 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
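The master learns about the stop through ZooKeeper: the region server's ephemeral znode under /hbase/rs disappears, the NodeChildrenChanged event fires, and RegionServerTracker processes the expiration. A bare-bones sketch of the same watch pattern using the plain ZooKeeper client; the ensemble address is the one from this run and the sleep is only there to keep the session open while observing events:

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsTrackerSketch {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) -> {
          // Fired as NodeChildrenChanged when an ephemeral RS znode is created or deleted.
          System.out.println("event: " + event.getType() + " on " + event.getPath());
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:56655", 30000, watcher);
        // Register a child watch on /hbase/rs, much like RegionServerTracker does.
        List<String> servers = zk.getChildren("/hbase/rs", true);
        System.out.println("live region servers: " + servers);
        Thread.sleep(60_000);
        zk.close();
      }
    }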
2023-05-24 21:57:00,691 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965358531] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965358531,5,FailOnTimeoutGroup] 2023-05-24 21:57:00,691 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965358532] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965358532,5,FailOnTimeoutGroup] 2023-05-24 21:57:00,691 DEBUG [M:0;jenkins-hbase20:44215] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:57:00,693 INFO [M:0;jenkins-hbase20:44215] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:57:00,693 INFO [M:0;jenkins-hbase20:44215] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 21:57:00,693 INFO [M:0;jenkins-hbase20:44215] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:57:00,693 DEBUG [M:0;jenkins-hbase20:44215] master.HMaster(1512): Stopping service threads 2023-05-24 21:57:00,693 INFO [M:0;jenkins-hbase20:44215] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:57:00,694 ERROR [M:0;jenkins-hbase20:44215] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 21:57:00,694 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:57:00,694 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:00,694 INFO [M:0;jenkins-hbase20:44215] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:57:00,694 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 21:57:00,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:57:00,695 DEBUG [M:0;jenkins-hbase20:44215] zookeeper.ZKUtil(398): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:57:00,695 WARN [M:0;jenkins-hbase20:44215] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:57:00,695 INFO [M:0;jenkins-hbase20:44215] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:57:00,695 INFO [M:0;jenkins-hbase20:44215] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:57:00,696 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:57:00,696 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:00,696 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:00,696 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:57:00,696 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 21:57:00,696 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=38.92 KB heapSize=47.38 KB 2023-05-24 21:57:00,710 INFO [M:0;jenkins-hbase20:44215] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.92 KB at sequenceid=100 (bloomFilter=true), to=hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd3bffc02c5947aaa9835c869c2eebd2 2023-05-24 21:57:00,716 INFO [M:0;jenkins-hbase20:44215] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd3bffc02c5947aaa9835c869c2eebd2 2023-05-24 21:57:00,717 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dd3bffc02c5947aaa9835c869c2eebd2 as hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd3bffc02c5947aaa9835c869c2eebd2 2023-05-24 21:57:00,722 INFO [M:0;jenkins-hbase20:44215] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd3bffc02c5947aaa9835c869c2eebd2 2023-05-24 21:57:00,722 INFO [M:0;jenkins-hbase20:44215] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43781/user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dd3bffc02c5947aaa9835c869c2eebd2, entries=11, sequenceid=100, filesize=6.1 K 2023-05-24 21:57:00,723 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegion(2948): Finished flush of dataSize ~38.92 KB/39854, heapSize ~47.36 KB/48496, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=100, compaction requested=false 2023-05-24 21:57:00,724 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:00,724 DEBUG [M:0;jenkins-hbase20:44215] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:57:00,724 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/30b89ab1-3e1c-871b-de42-0ee2d386c177/MasterData/WALs/jenkins-hbase20.apache.org,44215,1684965358337 2023-05-24 21:57:00,727 INFO [M:0;jenkins-hbase20:44215] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:57:00,727 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:57:00,727 INFO [M:0;jenkins-hbase20:44215] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44215 2023-05-24 21:57:00,729 DEBUG [M:0;jenkins-hbase20:44215] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,44215,1684965358337 already deleted, retry=false 2023-05-24 21:57:00,790 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:57:00,790 INFO [RS:0;jenkins-hbase20:34189] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,34189,1684965358375; zookeeper connection closed. 
2023-05-24 21:57:00,790 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): regionserver:34189-0x1017f7913170001, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:57:00,790 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1ee99d3d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1ee99d3d 2023-05-24 21:57:00,791 INFO [Listener at localhost.localdomain/39919] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 21:57:00,890 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:57:00,890 DEBUG [Listener at localhost.localdomain/39919-EventThread] zookeeper.ZKWatcher(600): master:44215-0x1017f7913170000, quorum=127.0.0.1:56655, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:57:00,890 INFO [M:0;jenkins-hbase20:44215] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44215,1684965358337; zookeeper connection closed. 2023-05-24 21:57:00,892 WARN [Listener at localhost.localdomain/39919] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:57:00,902 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:57:01,008 WARN [BP-821443314-148.251.75.209-1684965357856 heartbeating to localhost.localdomain/127.0.0.1:43781] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:57:01,008 WARN [BP-821443314-148.251.75.209-1684965357856 heartbeating to localhost.localdomain/127.0.0.1:43781] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-821443314-148.251.75.209-1684965357856 (Datanode Uuid cf3187b2-1989-461f-9f0e-c3d3bec52ee6) service to localhost.localdomain/127.0.0.1:43781 2023-05-24 21:57:01,009 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/dfs/data/data3/current/BP-821443314-148.251.75.209-1684965357856] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:57:01,009 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/dfs/data/data4/current/BP-821443314-148.251.75.209-1684965357856] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:57:01,010 WARN [Listener at localhost.localdomain/39919] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:57:01,014 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:57:01,119 WARN [BP-821443314-148.251.75.209-1684965357856 heartbeating to localhost.localdomain/127.0.0.1:43781] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:57:01,119 WARN [BP-821443314-148.251.75.209-1684965357856 
heartbeating to localhost.localdomain/127.0.0.1:43781] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-821443314-148.251.75.209-1684965357856 (Datanode Uuid 57025d95-4e1f-4e21-a0fc-1ef281259880) service to localhost.localdomain/127.0.0.1:43781 2023-05-24 21:57:01,119 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/dfs/data/data1/current/BP-821443314-148.251.75.209-1684965357856] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:57:01,120 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/cluster_7f2f30aa-c72f-fee1-cfb1-cf9d72208ddb/dfs/data/data2/current/BP-821443314-148.251.75.209-1684965357856] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:57:01,132 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:57:01,242 INFO [Listener at localhost.localdomain/39919] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:57:01,263 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:57:01,271 INFO [Listener at localhost.localdomain/39919] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testCompactionRecordDoesntBlockRolling Thread=92 (was 85) - Thread LEAK? -, OpenFileDescriptor=497 (was 462) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=57 (was 87), ProcessCount=170 (was 168) - ProcessCount LEAK? 
-, AvailableMemoryMB=9164 (was 9552) 2023-05-24 21:57:01,278 INFO [Listener at localhost.localdomain/39919] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRolling Thread=93, OpenFileDescriptor=497, MaxFileDescriptor=60000, SystemLoadAverage=57, ProcessCount=170, AvailableMemoryMB=9164 2023-05-24 21:57:01,278 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:57:01,278 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/hadoop.log.dir so I do NOT create it in target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/191629fa-ad39-65c7-235d-0becdff395a0/hadoop.tmp.dir so I do NOT create it in target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f, deleteOnExit=true 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/test.cache.data in system properties and HBase conf 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:57:01,279 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:57:01,280 INFO [Listener at 
localhost.localdomain/39919] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:57:01,280 DEBUG [Listener at localhost.localdomain/39919] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 21:57:01,280 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:57:01,280 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:57:01,280 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:57:01,280 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:57:01,280 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at 
localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/nfs.dump.dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:57:01,281 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:57:01,282 INFO [Listener at localhost.localdomain/39919] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:57:01,283 WARN [Listener at localhost.localdomain/39919] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:57:01,284 WARN [Listener at localhost.localdomain/39919] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:57:01,285 WARN [Listener at localhost.localdomain/39919] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:57:01,309 WARN [Listener at localhost.localdomain/39919] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:57:01,311 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:57:01,317 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/java.io.tmpdir/Jetty_localhost_localdomain_33139_hdfs____x1g8up/webapp 2023-05-24 21:57:01,388 INFO [Listener at localhost.localdomain/39919] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:33139 2023-05-24 21:57:01,389 WARN [Listener at localhost.localdomain/39919] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
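Everything from "Formatting using clusterid: testClusterID" onward is HBaseTestingUtility standing up a fresh MiniDFSCluster (a namenode plus two datanodes, each with an embedded Jetty UI) for the next test. Roughly the same thing can be done directly with the Hadoop test API; a sketch assuming the hadoop-hdfs test jars are on the classpath, with an illustrative directory layout:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Two datanodes, matching numDataNodes=2 in the StartMiniClusterOption above.
        MiniDFSCluster dfs = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
        try {
          FileSystem fs = dfs.getFileSystem();
          fs.mkdirs(new Path("/user/jenkins/test-data"));     // illustrative path
          System.out.println("NameNode at " + fs.getUri());
        } finally {
          dfs.shutdown();  // mirrors the datanode/namenode teardown seen earlier in the log
        }
      }
    }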
2023-05-24 21:57:01,390 WARN [Listener at localhost.localdomain/39919] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:57:01,390 WARN [Listener at localhost.localdomain/39919] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:57:01,416 WARN [Listener at localhost.localdomain/40073] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:57:01,423 WARN [Listener at localhost.localdomain/40073] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:57:01,426 WARN [Listener at localhost.localdomain/40073] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:57:01,427 INFO [Listener at localhost.localdomain/40073] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:57:01,432 INFO [Listener at localhost.localdomain/40073] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/java.io.tmpdir/Jetty_localhost_46393_datanode____ii1a2a/webapp 2023-05-24 21:57:01,510 INFO [Listener at localhost.localdomain/40073] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46393 2023-05-24 21:57:01,515 WARN [Listener at localhost.localdomain/42691] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:57:01,525 WARN [Listener at localhost.localdomain/42691] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:57:01,528 WARN [Listener at localhost.localdomain/42691] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:57:01,530 INFO [Listener at localhost.localdomain/42691] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:57:01,534 INFO [Listener at localhost.localdomain/42691] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/java.io.tmpdir/Jetty_localhost_39501_datanode____ykrulz/webapp 2023-05-24 21:57:01,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd6c5b35796906b5: Processing first storage report for DS-59494e1e-413d-42d2-8723-3d4e7005179e from datanode 115e466f-ede8-41b1-94f9-321881efa71e 2023-05-24 21:57:01,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd6c5b35796906b5: from storage DS-59494e1e-413d-42d2-8723-3d4e7005179e node DatanodeRegistration(127.0.0.1:42061, datanodeUuid=115e466f-ede8-41b1-94f9-321881efa71e, infoPort=46811, infoSecurePort=0, ipcPort=42691, storageInfo=lv=-57;cid=testClusterID;nsid=1384428922;c=1684965421286), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:57:01,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd6c5b35796906b5: Processing first storage report for DS-37292876-8096-4014-b117-8c097749df29 
from datanode 115e466f-ede8-41b1-94f9-321881efa71e 2023-05-24 21:57:01,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd6c5b35796906b5: from storage DS-37292876-8096-4014-b117-8c097749df29 node DatanodeRegistration(127.0.0.1:42061, datanodeUuid=115e466f-ede8-41b1-94f9-321881efa71e, infoPort=46811, infoSecurePort=0, ipcPort=42691, storageInfo=lv=-57;cid=testClusterID;nsid=1384428922;c=1684965421286), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:57:01,619 INFO [Listener at localhost.localdomain/42691] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39501 2023-05-24 21:57:01,626 WARN [Listener at localhost.localdomain/34377] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:57:01,687 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb41dfa78527299b4: Processing first storage report for DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1 from datanode 273b634a-ad6f-4c94-93d5-223b068eb43d 2023-05-24 21:57:01,687 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb41dfa78527299b4: from storage DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1 node DatanodeRegistration(127.0.0.1:43623, datanodeUuid=273b634a-ad6f-4c94-93d5-223b068eb43d, infoPort=34303, infoSecurePort=0, ipcPort=34377, storageInfo=lv=-57;cid=testClusterID;nsid=1384428922;c=1684965421286), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:57:01,687 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb41dfa78527299b4: Processing first storage report for DS-0d4cd402-07e3-4e18-a6fb-a0d7b7edbab7 from datanode 273b634a-ad6f-4c94-93d5-223b068eb43d 2023-05-24 21:57:01,687 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb41dfa78527299b4: from storage DS-0d4cd402-07e3-4e18-a6fb-a0d7b7edbab7 node DatanodeRegistration(127.0.0.1:43623, datanodeUuid=273b634a-ad6f-4c94-93d5-223b068eb43d, infoPort=34303, infoSecurePort=0, ipcPort=34377, storageInfo=lv=-57;cid=testClusterID;nsid=1384428922;c=1684965421286), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:57:01,734 DEBUG [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba 2023-05-24 21:57:01,738 INFO [Listener at localhost.localdomain/34377] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/zookeeper_0, clientPort=60895, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:57:01,740 INFO 
[Listener at localhost.localdomain/34377] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60895 2023-05-24 21:57:01,741 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,742 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,756 INFO [Listener at localhost.localdomain/34377] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d with version=8 2023-05-24 21:57:01,756 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:57:01,758 INFO [Listener at localhost.localdomain/34377] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:57:01,760 INFO [Listener at localhost.localdomain/34377] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40737 2023-05-24 21:57:01,760 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,761 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,762 INFO [Listener at localhost.localdomain/34377] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40737 connecting to ZooKeeper ensemble=127.0.0.1:60895 
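The master process is now coming up: the hbase.id and version files are written, the RPC executors are instantiated, and the NettyRpcServer binds before the master registers with ZooKeeper. The topology being started is the one requested at the top of this test; a sketch of requesting it explicitly, assuming the branch-2.4 StartMiniClusterOption builder, with the option values copied from the StartMiniClusterOption line logged above:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class StartOptionSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // one HMaster, as in this run
            .numRegionServers(1)  // one region server
            .numDataNodes(2)      // two HDFS datanodes
            .numZkServers(1)      // one ZooKeeper server
            .build();
        util.startMiniCluster(option);
        // ... test body runs against util.getConnection() ...
        util.shutdownMiniCluster();
      }
    }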
2023-05-24 21:57:01,766 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:407370x0, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:57:01,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40737-0x1017f7a0ad50000 connected 2023-05-24 21:57:01,779 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:57:01,779 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:57:01,780 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:57:01,781 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40737 2023-05-24 21:57:01,781 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40737 2023-05-24 21:57:01,781 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40737 2023-05-24 21:57:01,781 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40737 2023-05-24 21:57:01,782 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40737 2023-05-24 21:57:01,782 INFO [Listener at localhost.localdomain/34377] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d, hbase.cluster.distributed=false 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 
21:57:01,792 INFO [Listener at localhost.localdomain/34377] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:57:01,793 INFO [Listener at localhost.localdomain/34377] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46117 2023-05-24 21:57:01,794 INFO [Listener at localhost.localdomain/34377] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:57:01,794 DEBUG [Listener at localhost.localdomain/34377] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:57:01,795 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,796 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,796 INFO [Listener at localhost.localdomain/34377] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46117 connecting to ZooKeeper ensemble=127.0.0.1:60895 2023-05-24 21:57:01,803 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:461170x0, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:57:01,804 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): regionserver:461170x0, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:57:01,804 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46117-0x1017f7a0ad50001 connected 2023-05-24 21:57:01,805 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:57:01,806 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ZKUtil(164): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:57:01,806 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46117 2023-05-24 21:57:01,807 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46117 2023-05-24 21:57:01,807 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46117 2023-05-24 21:57:01,807 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46117 2023-05-24 21:57:01,807 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46117 2023-05-24 21:57:01,809 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:01,825 DEBUG [Listener at 
localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:57:01,826 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:01,836 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:57:01,836 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:57:01,836 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:01,838 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:57:01,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:57:01,840 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,40737,1684965421757 from backup master directory 2023-05-24 21:57:01,844 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:01,845 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:57:01,845 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:57:01,845 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:01,863 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/hbase.id with ID: 62154164-8d46-46e4-8e48-b51e3a04bd0e 2023-05-24 21:57:01,875 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:01,877 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:01,893 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x105addcb to 127.0.0.1:60895 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:57:01,897 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54f5ffde, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:57:01,897 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:57:01,951 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:57:01,953 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:57:01,958 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store-tmp 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:57:01,967 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:01,967 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:57:01,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:57:01,968 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/WALs/jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:01,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40737%2C1684965421757, suffix=, logDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/WALs/jenkins-hbase20.apache.org,40737,1684965421757, archiveDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/oldWALs, maxLogs=10 2023-05-24 21:57:01,980 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/WALs/jenkins-hbase20.apache.org,40737,1684965421757/jenkins-hbase20.apache.org%2C40737%2C1684965421757.1684965421971 2023-05-24 21:57:01,980 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43623,DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1,DISK], DatanodeInfoWithStorage[127.0.0.1:42061,DS-59494e1e-413d-42d2-8723-3d4e7005179e,DISK]] 2023-05-24 21:57:01,980 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:57:01,980 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:01,980 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,980 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,982 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,983 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 21:57:01,984 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:57:01,984 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:01,985 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,985 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,987 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:57:01,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:57:01,991 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=693388, jitterRate=-0.1183122992515564}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:57:01,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:57:01,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:57:01,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:57:01,993 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:57:01,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 21:57:01,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 21:57:01,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 21:57:01,994 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:57:01,994 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:57:01,995 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:57:02,005 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:57:02,005 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-05-24 21:57:02,006 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:57:02,006 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:57:02,007 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:57:02,008 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,009 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:57:02,009 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:57:02,010 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:57:02,011 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:57:02,011 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:57:02,011 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,011 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,40737,1684965421757, sessionid=0x1017f7a0ad50000, setting cluster-up flag (Was=false) 2023-05-24 21:57:02,014 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,016 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:57:02,017 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:02,019 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,021 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:57:02,022 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:02,023 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.hbase-snapshot/.tmp 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,026 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:57:02,027 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,028 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965452028 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:57:02,029 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,029 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:57:02,030 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:57:02,030 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:57:02,030 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:57:02,030 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:57:02,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965422031,5,FailOnTimeoutGroup] 2023-05-24 21:57:02,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965422031,5,FailOnTimeoutGroup] 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,031 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-05-24 21:57:02,032 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:57:02,044 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:57:02,045 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:57:02,045 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d 2023-05-24 21:57:02,051 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:02,052 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(951): ClusterId : 62154164-8d46-46e4-8e48-b51e3a04bd0e 2023-05-24 21:57:02,052 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:57:02,054 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:57:02,054 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:57:02,054 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:57:02,055 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info 2023-05-24 21:57:02,055 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:57:02,056 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:57:02,056 DEBUG [RS:0;jenkins-hbase20:46117] zookeeper.ReadOnlyZKClient(139): Connect 0x050bf2e8 to 127.0.0.1:60895 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:57:02,057 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,057 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:57:02,058 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:57:02,059 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:57:02,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:57:02,061 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/table 2023-05-24 21:57:02,062 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:57:02,062 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,063 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740 2023-05-24 21:57:02,063 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740 2023-05-24 21:57:02,065 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-05-24 21:57:02,066 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:57:02,068 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:57:02,069 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=835128, jitterRate=0.061920687556266785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:57:02,069 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:57:02,069 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:57:02,069 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:57:02,070 DEBUG [RS:0;jenkins-hbase20:46117] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bcc1c39, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:57:02,070 DEBUG [RS:0;jenkins-hbase20:46117] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ff39998, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:57:02,070 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:57:02,070 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:57:02,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:57:02,072 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:57:02,073 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:57:02,080 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.ShutdownHook(81): Installed shutdown hook 
thread: Shutdownhook:RS:0;jenkins-hbase20:46117 2023-05-24 21:57:02,080 INFO [RS:0;jenkins-hbase20:46117] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:57:02,080 INFO [RS:0;jenkins-hbase20:46117] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:57:02,080 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1022): About to register with Master. 2023-05-24 21:57:02,080 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,40737,1684965421757 with isa=jenkins-hbase20.apache.org/148.251.75.209:46117, startcode=1684965421791 2023-05-24 21:57:02,081 DEBUG [RS:0;jenkins-hbase20:46117] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:57:02,084 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57455, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:57:02,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,085 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d 2023-05-24 21:57:02,085 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40073 2023-05-24 21:57:02,085 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:57:02,086 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:57:02,087 DEBUG [RS:0;jenkins-hbase20:46117] zookeeper.ZKUtil(162): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,087 WARN [RS:0;jenkins-hbase20:46117] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-05-24 21:57:02,087 INFO [RS:0;jenkins-hbase20:46117] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:57:02,087 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,087 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46117,1684965421791] 2023-05-24 21:57:02,091 DEBUG [RS:0;jenkins-hbase20:46117] zookeeper.ZKUtil(162): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,092 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:57:02,092 INFO [RS:0;jenkins-hbase20:46117] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:57:02,093 INFO [RS:0;jenkins-hbase20:46117] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:57:02,093 INFO [RS:0;jenkins-hbase20:46117] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:57:02,094 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,094 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:57:02,095 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,095 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,096 DEBUG [RS:0;jenkins-hbase20:46117] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:57:02,096 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,096 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,097 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,105 INFO [RS:0;jenkins-hbase20:46117] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:57:02,105 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46117,1684965421791-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:57:02,113 INFO [RS:0;jenkins-hbase20:46117] regionserver.Replication(203): jenkins-hbase20.apache.org,46117,1684965421791 started 2023-05-24 21:57:02,113 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46117,1684965421791, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46117, sessionid=0x1017f7a0ad50001 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46117,1684965421791' 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:57:02,114 DEBUG [RS:0;jenkins-hbase20:46117] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,115 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46117,1684965421791' 2023-05-24 21:57:02,115 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:57:02,115 DEBUG [RS:0;jenkins-hbase20:46117] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:57:02,115 DEBUG [RS:0;jenkins-hbase20:46117] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:57:02,115 INFO [RS:0;jenkins-hbase20:46117] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:57:02,115 INFO [RS:0;jenkins-hbase20:46117] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-05-24 21:57:02,219 INFO [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46117%2C1684965421791, suffix=, logDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791, archiveDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs, maxLogs=32 2023-05-24 21:57:02,223 DEBUG [jenkins-hbase20:40737] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:57:02,225 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46117,1684965421791, state=OPENING 2023-05-24 21:57:02,228 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:57:02,229 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,230 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:57:02,230 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:02,238 INFO [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965422220 2023-05-24 21:57:02,238 DEBUG [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43623,DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1,DISK], DatanodeInfoWithStorage[127.0.0.1:42061,DS-59494e1e-413d-42d2-8723-3d4e7005179e,DISK]] 2023-05-24 21:57:02,386 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,386 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:57:02,389 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45226, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:57:02,393 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:57:02,393 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:57:02,395 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46117%2C1684965421791.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791, archiveDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs, maxLogs=32 2023-05-24 21:57:02,405 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.meta.1684965422396.meta 2023-05-24 21:57:02,405 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42061,DS-59494e1e-413d-42d2-8723-3d4e7005179e,DISK], DatanodeInfoWithStorage[127.0.0.1:43623,DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1,DISK]] 2023-05-24 21:57:02,405 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:57:02,406 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:57:02,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:57:02,408 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:57:02,410 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info 2023-05-24 21:57:02,410 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info 2023-05-24 21:57:02,410 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:57:02,411 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,411 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:57:02,413 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:57:02,413 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:57:02,413 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:57:02,414 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,414 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:57:02,416 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/table 2023-05-24 21:57:02,416 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/table 2023-05-24 21:57:02,416 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:57:02,417 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,419 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740 2023-05-24 21:57:02,421 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740 2023-05-24 21:57:02,425 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:57:02,427 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:57:02,428 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=865033, jitterRate=0.09994681179523468}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:57:02,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:57:02,429 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965422386 2023-05-24 21:57:02,432 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:57:02,433 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:57:02,434 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46117,1684965421791, state=OPEN 2023-05-24 21:57:02,435 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:57:02,435 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:57:02,438 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:57:02,438 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46117,1684965421791 in 205 msec 2023-05-24 
21:57:02,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:57:02,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 368 msec 2023-05-24 21:57:02,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 417 msec 2023-05-24 21:57:02,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965422443, completionTime=-1 2023-05-24 21:57:02,443 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:57:02,443 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:57:02,445 DEBUG [hconnection-0x1eac3c8c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:57:02,447 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45236, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:57:02,449 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:57:02,449 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965482449 2023-05-24 21:57:02,449 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965542449 2023-05-24 21:57:02,449 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40737,1684965421757-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40737,1684965421757-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40737,1684965421757-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:40737, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-05-24 21:57:02,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:57:02,456 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:57:02,456 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:57:02,457 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:57:02,458 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:57:02,460 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,460 DEBUG [HFileArchiver-9] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b empty. 2023-05-24 21:57:02,461 DEBUG [HFileArchiver-9] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,461 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:57:02,469 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:57:02,470 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 989bbe07e8b19f8d9c81b639ab087e4b, NAME => 'hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 989bbe07e8b19f8d9c81b639ab087e4b, disabling compactions & flushes 2023-05-24 21:57:02,477 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. after waiting 0 ms 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,477 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,477 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 989bbe07e8b19f8d9c81b639ab087e4b: 2023-05-24 21:57:02,480 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:57:02,481 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965422480"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965422480"}]},"ts":"1684965422480"} 2023-05-24 21:57:02,483 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:57:02,484 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:57:02,484 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965422484"}]},"ts":"1684965422484"} 2023-05-24 21:57:02,485 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:57:02,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=989bbe07e8b19f8d9c81b639ab087e4b, ASSIGN}] 2023-05-24 21:57:02,492 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=989bbe07e8b19f8d9c81b639ab087e4b, ASSIGN 2023-05-24 21:57:02,493 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=989bbe07e8b19f8d9c81b639ab087e4b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46117,1684965421791; forceNewPlan=false, retain=false 2023-05-24 21:57:02,644 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=989bbe07e8b19f8d9c81b639ab087e4b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,645 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965422644"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965422644"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965422644"}]},"ts":"1684965422644"} 2023-05-24 21:57:02,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 989bbe07e8b19f8d9c81b639ab087e4b, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:02,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 989bbe07e8b19f8d9c81b639ab087e4b, NAME => 'hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:57:02,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:02,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,807 INFO [StoreOpener-989bbe07e8b19f8d9c81b639ab087e4b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,808 DEBUG [StoreOpener-989bbe07e8b19f8d9c81b639ab087e4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/info 2023-05-24 21:57:02,808 DEBUG [StoreOpener-989bbe07e8b19f8d9c81b639ab087e4b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/info 2023-05-24 21:57:02,809 INFO [StoreOpener-989bbe07e8b19f8d9c81b639ab087e4b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 989bbe07e8b19f8d9c81b639ab087e4b columnFamilyName info 2023-05-24 21:57:02,809 INFO [StoreOpener-989bbe07e8b19f8d9c81b639ab087e4b-1] regionserver.HStore(310): Store=989bbe07e8b19f8d9c81b639ab087e4b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:02,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,812 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:57:02,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:57:02,815 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 989bbe07e8b19f8d9c81b639ab087e4b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=837983, jitterRate=0.06555092334747314}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:57:02,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 989bbe07e8b19f8d9c81b639ab087e4b: 2023-05-24 21:57:02,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b., pid=6, masterSystemTime=1684965422802 2023-05-24 21:57:02,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:57:02,820 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 
2023-05-24 21:57:02,820 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=989bbe07e8b19f8d9c81b639ab087e4b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:02,821 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965422820"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965422820"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965422820"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965422820"}]},"ts":"1684965422820"} 2023-05-24 21:57:02,825 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:57:02,825 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 989bbe07e8b19f8d9c81b639ab087e4b, server=jenkins-hbase20.apache.org,46117,1684965421791 in 174 msec 2023-05-24 21:57:02,827 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:57:02,827 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=989bbe07e8b19f8d9c81b639ab087e4b, ASSIGN in 336 msec 2023-05-24 21:57:02,828 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:57:02,828 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965422828"}]},"ts":"1684965422828"} 2023-05-24 21:57:02,830 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:57:02,832 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:57:02,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 377 msec 2023-05-24 21:57:02,857 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:57:02,858 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:57:02,858 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,862 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:57:02,870 DEBUG [Listener at localhost.localdomain/34377-EventThread] 
zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:57:02,873 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-05-24 21:57:02,884 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:57:02,893 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:57:02,897 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-05-24 21:57:02,909 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:57:02,910 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.065sec 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40737,1684965421757-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:57:02,910 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40737,1684965421757-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-05-24 21:57:02,912 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:57:02,956 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ReadOnlyZKClient(139): Connect 0x3150a4ee to 127.0.0.1:60895 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:57:02,960 DEBUG [Listener at localhost.localdomain/34377] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13b83b14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:57:02,962 DEBUG [hconnection-0x62ecb242-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:57:02,967 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45250, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:57:02,969 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:57:02,969 INFO [Listener at localhost.localdomain/34377] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:57:02,983 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:57:02,983 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:57:02,984 INFO [Listener at localhost.localdomain/34377] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:57:02,986 DEBUG [Listener at localhost.localdomain/34377] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-05-24 21:57:02,989 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:47876, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-05-24 21:57:02,990 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] util.TableDescriptorChecker(340): MAX_FILESIZE for table descriptor or "hbase.hregion.max.filesize" (786432) is too small, which might cause over splitting into unmanageable number of regions. 2023-05-24 21:57:02,990 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] util.TableDescriptorChecker(340): MEMSTORE_FLUSHSIZE for table descriptor or "hbase.hregion.memstore.flush.size" (8192) is too small, which might cause very frequent flushing. 
2023-05-24 21:57:02,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:57:02,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestLogRolling-testLogRolling 2023-05-24 21:57:02,996 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:57:02,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] master.MasterRpcServices(697): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "TestLogRolling-testLogRolling" procId is: 9 2023-05-24 21:57:02,997 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:57:02,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:57:02,998 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:02,999 DEBUG [HFileArchiver-10] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09 empty. 
2023-05-24 21:57:02,999 DEBUG [HFileArchiver-10] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:02,999 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestLogRolling-testLogRolling regions 2023-05-24 21:57:03,012 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp/data/default/TestLogRolling-testLogRolling/.tabledesc/.tableinfo.0000000001 2023-05-24 21:57:03,013 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(7675): creating {ENCODED => 738d7332a009ea44c634f4308996fd09, NAME => 'TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestLogRolling-testLogRolling', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/.tmp 2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1604): Closing 738d7332a009ea44c634f4308996fd09, disabling compactions & flushes 2023-05-24 21:57:03,023 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. after waiting 0 ms 2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:03,023 INFO [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
2023-05-24 21:57:03,023 DEBUG [RegionOpenAndInit-TestLogRolling-testLogRolling-pool-0] regionserver.HRegion(1558): Region close journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:03,026 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:57:03,027 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965423027"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965423027"}]},"ts":"1684965423027"} 2023-05-24 21:57:03,028 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:57:03,030 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:57:03,030 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965423030"}]},"ts":"1684965423030"} 2023-05-24 21:57:03,032 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLING in hbase:meta 2023-05-24 21:57:03,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, ASSIGN}] 2023-05-24 21:57:03,038 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, ASSIGN 2023-05-24 21:57:03,039 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,46117,1684965421791; forceNewPlan=false, retain=false 2023-05-24 21:57:03,190 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=738d7332a009ea44c634f4308996fd09, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:03,190 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965423190"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965423190"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965423190"}]},"ts":"1684965423190"} 2023-05-24 21:57:03,193 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:03,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open 
TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:03,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 738d7332a009ea44c634f4308996fd09, NAME => 'TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:57:03,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:03,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,357 INFO [StoreOpener-738d7332a009ea44c634f4308996fd09-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,359 DEBUG [StoreOpener-738d7332a009ea44c634f4308996fd09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info 2023-05-24 21:57:03,359 DEBUG [StoreOpener-738d7332a009ea44c634f4308996fd09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info 2023-05-24 21:57:03,359 INFO [StoreOpener-738d7332a009ea44c634f4308996fd09-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 738d7332a009ea44c634f4308996fd09 columnFamilyName info 2023-05-24 21:57:03,360 INFO [StoreOpener-738d7332a009ea44c634f4308996fd09-1] regionserver.HStore(310): Store=738d7332a009ea44c634f4308996fd09/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:03,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:03,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:57:03,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 738d7332a009ea44c634f4308996fd09; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=803289, jitterRate=0.021435707807540894}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:57:03,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:03,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09., pid=11, masterSystemTime=1684965423346 2023-05-24 21:57:03,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:03,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
2023-05-24 21:57:03,374 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=738d7332a009ea44c634f4308996fd09, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:03,374 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965423374"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965423374"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965423374"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965423374"}]},"ts":"1684965423374"} 2023-05-24 21:57:03,381 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-05-24 21:57:03,381 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791 in 184 msec 2023-05-24 21:57:03,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-05-24 21:57:03,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, ASSIGN in 345 msec 2023-05-24 21:57:03,386 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:57:03,386 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestLogRolling-testLogRolling","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965423386"}]},"ts":"1684965423386"} 2023-05-24 21:57:03,388 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestLogRolling-testLogRolling, state=ENABLED in hbase:meta 2023-05-24 21:57:03,392 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestLogRolling-testLogRolling execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:57:03,394 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestLogRolling-testLogRolling in 401 msec 2023-05-24 21:57:05,664 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:57:08,092 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 21:57:08,093 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 21:57:08,093 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestLogRolling-testLogRolling' 2023-05-24 21:57:12,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40737] master.MasterRpcServices(1227): Checking to see if procedure is done pid=9 2023-05-24 21:57:13,000 INFO [Listener at localhost.localdomain/34377] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:TestLogRolling-testLogRolling, procId: 9 completed 2023-05-24 21:57:13,004 DEBUG [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(2627): Found 1 regions for table TestLogRolling-testLogRolling 2023-05-24 21:57:13,005 DEBUG [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(2633): firstRegionName=TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:13,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:13,024 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 738d7332a009ea44c634f4308996fd09 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:13,038 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=11 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/aaa7577232994e2881a618db5e8a3111 2023-05-24 21:57:13,048 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/aaa7577232994e2881a618db5e8a3111 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111 2023-05-24 21:57:13,054 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111, entries=7, sequenceid=11, filesize=12.1 K 2023-05-24 21:57:13,054 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 738d7332a009ea44c634f4308996fd09 in 30ms, sequenceid=11, compaction requested=false 2023-05-24 21:57:13,055 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:13,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:13,055 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 738d7332a009ea44c634f4308996fd09 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-24 21:57:13,067 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=34 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/7872b5a076f04b8e81ca072acc7b6361 2023-05-24 21:57:13,073 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/7872b5a076f04b8e81ca072acc7b6361 as 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361 2023-05-24 21:57:13,080 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361, entries=20, sequenceid=34, filesize=25.8 K 2023-05-24 21:57:13,081 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=5.25 KB/5380 for 738d7332a009ea44c634f4308996fd09 in 26ms, sequenceid=34, compaction requested=false 2023-05-24 21:57:13,081 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:13,081 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=37.9 K, sizeToCheck=16.0 K 2023-05-24 21:57:13,081 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:13,081 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361 because midkey is the same as first or last row 2023-05-24 21:57:15,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:15,071 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 738d7332a009ea44c634f4308996fd09 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:15,093 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=44 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/9af558964aaa4ce48c4510546b93fa18 2023-05-24 21:57:15,100 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/9af558964aaa4ce48c4510546b93fa18 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18 2023-05-24 21:57:15,106 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18, entries=7, sequenceid=44, filesize=12.1 K 2023-05-24 21:57:15,107 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=22.07 KB/22596 for 738d7332a009ea44c634f4308996fd09 in 36ms, sequenceid=44, compaction requested=true 2023-05-24 21:57:15,107 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 738d7332a009ea44c634f4308996fd09: 
2023-05-24 21:57:15,107 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=50.0 K, sizeToCheck=16.0 K 2023-05-24 21:57:15,107 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:15,107 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361 because midkey is the same as first or last row 2023-05-24 21:57:15,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:15,108 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:15,108 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:15,108 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 738d7332a009ea44c634f4308996fd09 1/1 column families, dataSize=23.12 KB heapSize=25 KB 2023-05-24 21:57:15,109 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 51218 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:15,110 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 738d7332a009ea44c634f4308996fd09/info is initiating minor compaction (all files) 2023-05-24 21:57:15,110 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 738d7332a009ea44c634f4308996fd09/info in TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
2023-05-24 21:57:15,110 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp, totalSize=50.0 K 2023-05-24 21:57:15,111 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting aaa7577232994e2881a618db5e8a3111, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=11, earliestPutTs=1684965433010 2023-05-24 21:57:15,112 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 7872b5a076f04b8e81ca072acc7b6361, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=34, earliestPutTs=1684965433025 2023-05-24 21:57:15,112 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 9af558964aaa4ce48c4510546b93fa18, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1684965433056 2023-05-24 21:57:15,122 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 21:57:15,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] ipc.CallRunner(144): callId: 72 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:45250 deadline: 1684965445122, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:15,123 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.12 KB at sequenceid=69 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/1cc44a12a75c4f6f85d1f4230741d9ad 2023-05-24 21:57:15,126 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 738d7332a009ea44c634f4308996fd09#info#compaction#29 average throughput is 34.89 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:15,129 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/1cc44a12a75c4f6f85d1f4230741d9ad as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad 2023-05-24 21:57:15,141 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad, entries=22, sequenceid=69, filesize=27.9 K 2023-05-24 21:57:15,143 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.12 KB/23672, heapSize ~24.98 KB/25584, currentSize=7.36 KB/7532 for 738d7332a009ea44c634f4308996fd09 in 35ms, sequenceid=69, compaction requested=false 2023-05-24 21:57:15,143 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:15,143 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=77.9 K, sizeToCheck=16.0 K 2023-05-24 21:57:15,143 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:15,143 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad because midkey is the same as first or last row 2023-05-24 21:57:15,145 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/3d9b876bc4c9487dad43dbc2a517caaf as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf 2023-05-24 21:57:15,153 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 738d7332a009ea44c634f4308996fd09/info of 738d7332a009ea44c634f4308996fd09 into 3d9b876bc4c9487dad43dbc2a517caaf(size=40.7 K), total size for store is 68.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 21:57:15,153 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:15,153 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09., storeName=738d7332a009ea44c634f4308996fd09/info, priority=13, startTime=1684965435107; duration=0sec 2023-05-24 21:57:15,154 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=68.6 K, sizeToCheck=16.0 K 2023-05-24 21:57:15,154 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:15,154 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf because midkey is the same as first or last row 2023-05-24 21:57:15,154 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:25,195 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 738d7332a009ea44c634f4308996fd09 1/1 column families, dataSize=8.41 KB heapSize=9.25 KB 2023-05-24 21:57:25,210 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.41 KB at sequenceid=81 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/8583792f72884121a7699684b39753c2 2023-05-24 21:57:25,216 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/8583792f72884121a7699684b39753c2 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2 2023-05-24 21:57:25,221 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2, entries=8, sequenceid=81, filesize=13.2 K 2023-05-24 21:57:25,221 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.41 KB/8608, heapSize ~9.23 KB/9456, currentSize=0 B/0 for 738d7332a009ea44c634f4308996fd09 in 26ms, sequenceid=81, compaction requested=true 2023-05-24 21:57:25,221 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:25,222 DEBUG [MemStoreFlusher.0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=81.7 K, sizeToCheck=16.0 K 
2023-05-24 21:57:25,222 DEBUG [MemStoreFlusher.0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:25,222 DEBUG [MemStoreFlusher.0] regionserver.StoreUtils(129): cannot split hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf because midkey is the same as first or last row 2023-05-24 21:57:25,222 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,222 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:25,223 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 83687 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:25,223 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 738d7332a009ea44c634f4308996fd09/info is initiating minor compaction (all files) 2023-05-24 21:57:25,223 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 738d7332a009ea44c634f4308996fd09/info in TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:25,223 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp, totalSize=81.7 K 2023-05-24 21:57:25,223 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 3d9b876bc4c9487dad43dbc2a517caaf, keycount=34, bloomtype=ROW, size=40.7 K, encoding=NONE, compression=NONE, seqNum=44, earliestPutTs=1684965433010 2023-05-24 21:57:25,224 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 1cc44a12a75c4f6f85d1f4230741d9ad, keycount=22, bloomtype=ROW, size=27.9 K, encoding=NONE, compression=NONE, seqNum=69, earliestPutTs=1684965435072 2023-05-24 21:57:25,224 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 8583792f72884121a7699684b39753c2, keycount=8, bloomtype=ROW, size=13.2 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1684965435109 2023-05-24 21:57:25,235 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 738d7332a009ea44c634f4308996fd09#info#compaction#31 average throughput is 32.84 
MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:25,248 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/.tmp/info/0312150fca5d4e938f2893e6247550d8 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8 2023-05-24 21:57:25,254 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 738d7332a009ea44c634f4308996fd09/info of 738d7332a009ea44c634f4308996fd09 into 0312150fca5d4e938f2893e6247550d8(size=72.5 K), total size for store is 72.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:57:25,254 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:25,254 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09., storeName=738d7332a009ea44c634f4308996fd09/info, priority=13, startTime=1684965445222; duration=0sec 2023-05-24 21:57:25,254 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.ConstantSizeRegionSplitPolicy(109): Should split because info size=72.5 K, sizeToCheck=16.0 K 2023-05-24 21:57:25,254 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.IncreasingToUpperBoundRegionSplitPolicy(84): regionsWithCommonTable=1 2023-05-24 21:57:25,255 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit(227): Splitting TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09., compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,255 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,256 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40737] assignment.AssignmentManager(1140): Split request from jenkins-hbase20.apache.org,46117,1684965421791, parent={ENCODED => 738d7332a009ea44c634f4308996fd09, NAME => 'TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.', STARTKEY => '', ENDKEY => ''} splitKey=row0062 2023-05-24 21:57:25,263 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40737] assignment.SplitTableRegionProcedure(219): Splittable=true state=OPEN, location=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,269 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40737] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=738d7332a009ea44c634f4308996fd09, daughterA=8e4ff1e6903edab92e16eeb232bbe277, daughterB=52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,270 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure 
table=TestLogRolling-testLogRolling, parent=738d7332a009ea44c634f4308996fd09, daughterA=8e4ff1e6903edab92e16eeb232bbe277, daughterB=52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,270 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=738d7332a009ea44c634f4308996fd09, daughterA=8e4ff1e6903edab92e16eeb232bbe277, daughterB=52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,270 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:SPLIT_TABLE_REGION_PREPARE; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=738d7332a009ea44c634f4308996fd09, daughterA=8e4ff1e6903edab92e16eeb232bbe277, daughterB=52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, UNASSIGN}] 2023-05-24 21:57:25,279 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, UNASSIGN 2023-05-24 21:57:25,279 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=738d7332a009ea44c634f4308996fd09, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,280 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965445279"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965445279"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965445279"}]},"ts":"1684965445279"} 2023-05-24 21:57:25,281 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; CloseRegionProcedure 738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:25,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:25,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 738d7332a009ea44c634f4308996fd09, disabling compactions & flushes 2023-05-24 21:57:25,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:25,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 2023-05-24 21:57:25,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. after waiting 0 ms 2023-05-24 21:57:25,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
2023-05-24 21:57:25,452 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2] to archive 2023-05-24 21:57:25,453 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-05-24 21:57:25,455 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/aaa7577232994e2881a618db5e8a3111 2023-05-24 21:57:25,456 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/7872b5a076f04b8e81ca072acc7b6361 2023-05-24 21:57:25,458 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/3d9b876bc4c9487dad43dbc2a517caaf 2023-05-24 21:57:25,459 DEBUG 
[StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/9af558964aaa4ce48c4510546b93fa18 2023-05-24 21:57:25,461 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/1cc44a12a75c4f6f85d1f4230741d9ad 2023-05-24 21:57:25,462 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/8583792f72884121a7699684b39753c2 2023-05-24 21:57:25,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=1 2023-05-24 21:57:25,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
2023-05-24 21:57:25,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 738d7332a009ea44c634f4308996fd09: 2023-05-24 21:57:25,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:25,473 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=738d7332a009ea44c634f4308996fd09, regionState=CLOSED 2023-05-24 21:57:25,473 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965445473"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965445473"}]},"ts":"1684965445473"} 2023-05-24 21:57:25,478 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-05-24 21:57:25,478 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; CloseRegionProcedure 738d7332a009ea44c634f4308996fd09, server=jenkins-hbase20.apache.org,46117,1684965421791 in 194 msec 2023-05-24 21:57:25,481 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-05-24 21:57:25,481 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=738d7332a009ea44c634f4308996fd09, UNASSIGN in 201 msec 2023-05-24 21:57:25,495 INFO [PEWorker-1] assignment.SplitTableRegionProcedure(694): pid=12 splitting 1 storefiles, region=738d7332a009ea44c634f4308996fd09, threads=1 2023-05-24 21:57:25,496 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(776): pid=12 splitting started for store file: hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8 for region: 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:25,527 DEBUG [StoreFileSplitter-pool-0] assignment.SplitTableRegionProcedure(787): pid=12 splitting complete for store file: hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8 for region: 738d7332a009ea44c634f4308996fd09 2023-05-24 21:57:25,528 DEBUG [PEWorker-1] assignment.SplitTableRegionProcedure(755): pid=12 split storefiles for region 738d7332a009ea44c634f4308996fd09 Daughter A: 1 storefiles, Daughter B: 1 storefiles. 
2023-05-24 21:57:25,553 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-05-24 21:57:25,555 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/recovered.edits/85.seqid, newMaxSeqId=85, maxSeqId=-1 2023-05-24 21:57:25,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1684965445557"},{"qualifier":"splitA","vlen":70,"tag":[],"timestamp":"1684965445557"},{"qualifier":"splitB","vlen":70,"tag":[],"timestamp":"1684965445557"}]},"ts":"1684965445557"} 2023-05-24 21:57:25,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445557"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965445557"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965445557"}]},"ts":"1684965445557"} 2023-05-24 21:57:25,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445557"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965445557"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965445557"}]},"ts":"1684965445557"} 2023-05-24 21:57:25,590 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46117] regionserver.HRegion(9158): Flush requested on 1588230740 2023-05-24 21:57:25,590 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-05-24 21:57:25,591 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.82 KB heapSize=8.36 KB 2023-05-24 21:57:25,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=8e4ff1e6903edab92e16eeb232bbe277, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=52b5c6e24874c3512bf59506a4301984, ASSIGN}] 2023-05-24 21:57:25,601 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=8e4ff1e6903edab92e16eeb232bbe277, ASSIGN 2023-05-24 21:57:25,601 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=52b5c6e24874c3512bf59506a4301984, ASSIGN 2023-05-24 21:57:25,602 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.61 KB at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/info/4616fa375a074214b2a225b2209881cb 2023-05-24 21:57:25,602 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=8e4ff1e6903edab92e16eeb232bbe277, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,46117,1684965421791; forceNewPlan=false, retain=false 2023-05-24 21:57:25,602 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=52b5c6e24874c3512bf59506a4301984, ASSIGN; state=SPLITTING_NEW, location=jenkins-hbase20.apache.org,46117,1684965421791; forceNewPlan=false, retain=false 2023-05-24 21:57:25,614 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=216 B at sequenceid=17 (bloomFilter=false), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/table/9cd1b5e9cf9a4f9da0de5652503ee882 2023-05-24 21:57:25,620 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/info/4616fa375a074214b2a225b2209881cb as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info/4616fa375a074214b2a225b2209881cb 2023-05-24 21:57:25,625 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info/4616fa375a074214b2a225b2209881cb, entries=29, sequenceid=17, filesize=8.6 K 2023-05-24 21:57:25,626 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/table/9cd1b5e9cf9a4f9da0de5652503ee882 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/table/9cd1b5e9cf9a4f9da0de5652503ee882 2023-05-24 21:57:25,633 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/table/9cd1b5e9cf9a4f9da0de5652503ee882, entries=4, sequenceid=17, filesize=4.8 K 2023-05-24 21:57:25,635 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~4.82 KB/4939, heapSize ~8.08 KB/8272, currentSize=0 B/0 for 1588230740 in 43ms, sequenceid=17, compaction requested=false 2023-05-24 21:57:25,636 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 21:57:25,754 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=8e4ff1e6903edab92e16eeb232bbe277, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,754 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=52b5c6e24874c3512bf59506a4301984, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,754 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445754"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965445754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965445754"}]},"ts":"1684965445754"} 2023-05-24 21:57:25,754 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445754"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965445754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965445754"}]},"ts":"1684965445754"} 2023-05-24 21:57:25,756 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; OpenRegionProcedure 8e4ff1e6903edab92e16eeb232bbe277, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:25,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791}] 2023-05-24 21:57:25,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
2023-05-24 21:57:25,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 52b5c6e24874c3512bf59506a4301984, NAME => 'TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.', STARTKEY => 'row0062', ENDKEY => ''} 2023-05-24 21:57:25,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:25,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,915 INFO [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,915 DEBUG [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info 2023-05-24 21:57:25,916 DEBUG [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info 2023-05-24 21:57:25,916 INFO [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 52b5c6e24874c3512bf59506a4301984 columnFamilyName info 2023-05-24 21:57:25,927 DEBUG [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-top 
2023-05-24 21:57:25,927 INFO [StoreOpener-52b5c6e24874c3512bf59506a4301984-1] regionserver.HStore(310): Store=52b5c6e24874c3512bf59506a4301984/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:25,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:25,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 52b5c6e24874c3512bf59506a4301984; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=802542, jitterRate=0.020485207438468933}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:57:25,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:25,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., pid=18, masterSystemTime=1684965445909 2023-05-24 21:57:25,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,934 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-24 21:57:25,935 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:57:25,935 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:57:25,935 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
2023-05-24 21:57:25,935 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-top] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=72.5 K 2023-05-24 21:57:25,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:57:25,936 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1684965433010 2023-05-24 21:57:25,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:57:25,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:57:25,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8e4ff1e6903edab92e16eeb232bbe277, NAME => 'TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.', STARTKEY => '', ENDKEY => 'row0062'} 2023-05-24 21:57:25,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestLogRolling-testLogRolling 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:57:25,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,937 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=52b5c6e24874c3512bf59506a4301984, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,937 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445937"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965445937"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965445937"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965445937"}]},"ts":"1684965445937"} 2023-05-24 21:57:25,938 INFO [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,939 DEBUG [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info 2023-05-24 21:57:25,939 DEBUG [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info 2023-05-24 21:57:25,939 INFO [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8e4ff1e6903edab92e16eeb232bbe277 columnFamilyName info 2023-05-24 21:57:25,941 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#34 average throughput is 3.08 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:25,941 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-05-24 21:57:25,941 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 in 182 msec 2023-05-24 21:57:25,942 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=52b5c6e24874c3512bf59506a4301984, ASSIGN in 341 msec 2023-05-24 21:57:25,954 DEBUG [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-bottom 2023-05-24 21:57:25,955 INFO [StoreOpener-8e4ff1e6903edab92e16eeb232bbe277-1] regionserver.HStore(310): Store=8e4ff1e6903edab92e16eeb232bbe277/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:57:25,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,960 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:57:25,960 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/c2cc445557db4ce1835c82bf6bc9f20b as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/c2cc445557db4ce1835c82bf6bc9f20b 2023-05-24 21:57:25,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8e4ff1e6903edab92e16eeb232bbe277; next sequenceid=86; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=847603, jitterRate=0.07778382301330566}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:57:25,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8e4ff1e6903edab92e16eeb232bbe277: 2023-05-24 21:57:25,962 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): 
Post open deploy tasks for TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277., pid=17, masterSystemTime=1684965445909 2023-05-24 21:57:25,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,964 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 1 store files, 0 compacting, 1 eligible, 16 blocking 2023-05-24 21:57:25,964 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1898): Keeping/Overriding Compaction request priority to -2147482648 for CF info since it belongs to recently split daughter region TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:57:25,964 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1912): 8e4ff1e6903edab92e16eeb232bbe277/info is initiating minor compaction (all files) 2023-05-24 21:57:25,964 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 8e4ff1e6903edab92e16eeb232bbe277/info in TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:57:25,964 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-bottom] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/.tmp, totalSize=72.5 K 2023-05-24 21:57:25,965 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.Compactor(207): Compacting 0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09, keycount=32, bloomtype=ROW, size=72.5 K, encoding=NONE, compression=NONE, seqNum=81, earliestPutTs=1684965433010 2023-05-24 21:57:25,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:57:25,965 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 
2023-05-24 21:57:25,966 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=8e4ff1e6903edab92e16eeb232bbe277, regionState=OPEN, openSeqNum=86, regionLocation=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:25,966 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.","families":{"info":[{"qualifier":"regioninfo","vlen":70,"tag":[],"timestamp":"1684965445966"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965445966"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965445966"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965445966"}]},"ts":"1684965445966"} 2023-05-24 21:57:25,968 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into c2cc445557db4ce1835c82bf6bc9f20b(size=8.0 K), total size for store is 8.0 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:57:25,968 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:25,968 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=15, startTime=1684965445933; duration=0sec 2023-05-24 21:57:25,968 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:25,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-05-24 21:57:25,969 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; OpenRegionProcedure 8e4ff1e6903edab92e16eeb232bbe277, server=jenkins-hbase20.apache.org,46117,1684965421791 in 211 msec 2023-05-24 21:57:25,971 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] throttle.PressureAwareThroughputController(145): 8e4ff1e6903edab92e16eeb232bbe277#info#compaction#35 average throughput is 31.30 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:25,971 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-05-24 21:57:25,971 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestLogRolling-testLogRolling, region=8e4ff1e6903edab92e16eeb232bbe277, ASSIGN in 369 msec 2023-05-24 21:57:25,973 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; SplitTableRegionProcedure table=TestLogRolling-testLogRolling, parent=738d7332a009ea44c634f4308996fd09, daughterA=8e4ff1e6903edab92e16eeb232bbe277, daughterB=52b5c6e24874c3512bf59506a4301984 in 707 msec 2023-05-24 21:57:25,991 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/.tmp/info/1020d37567ea43d0925a50bfb37557e1 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/1020d37567ea43d0925a50bfb37557e1 2023-05-24 21:57:25,997 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1652): Completed compaction of 1 (all) file(s) in 8e4ff1e6903edab92e16eeb232bbe277/info of 8e4ff1e6903edab92e16eeb232bbe277 into 1020d37567ea43d0925a50bfb37557e1(size=69.1 K), total size for store is 69.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:57:25,998 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 8e4ff1e6903edab92e16eeb232bbe277: 2023-05-24 21:57:25,998 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277., storeName=8e4ff1e6903edab92e16eeb232bbe277/info, priority=15, startTime=1684965445962; duration=0sec 2023-05-24 21:57:25,998 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:27,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] ipc.CallRunner(144): callId: 75 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:45250 deadline: 1684965457196, exception=org.apache.hadoop.hbase.NotServingRegionException: TestLogRolling-testLogRolling,,1684965422990.738d7332a009ea44c634f4308996fd09. 
is not online on jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:31,031 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:57:37,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:37,315 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:37,327 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=96 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/1231e2e91ed04748a9948d835e098366 2023-05-24 21:57:37,334 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/1231e2e91ed04748a9948d835e098366 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366 2023-05-24 21:57:37,339 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366, entries=7, sequenceid=96, filesize=12.1 K 2023-05-24 21:57:37,340 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=18.91 KB/19368 for 52b5c6e24874c3512bf59506a4301984 in 25ms, sequenceid=96, compaction requested=false 2023-05-24 21:57:37,340 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:37,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:37,341 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=19.96 KB heapSize=21.63 KB 2023-05-24 21:57:37,350 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=19.96 KB at sequenceid=118 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/fb01b204a7fb46aa80749ff73bc095d4 2023-05-24 21:57:37,355 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/fb01b204a7fb46aa80749ff73bc095d4 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4 2023-05-24 21:57:37,360 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4, entries=19, sequenceid=118, filesize=24.7 K 2023-05-24 21:57:37,361 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~19.96 KB/20444, heapSize ~21.61 KB/22128, currentSize=6.30 KB/6456 for 52b5c6e24874c3512bf59506a4301984 in 20ms, sequenceid=118, compaction requested=true 2023-05-24 21:57:37,361 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:37,361 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 21:57:37,361 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:37,362 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 45892 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:37,362 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:57:37,362 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:57:37,362 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/c2cc445557db4ce1835c82bf6bc9f20b, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=44.8 K 2023-05-24 21:57:37,363 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting c2cc445557db4ce1835c82bf6bc9f20b, keycount=3, bloomtype=ROW, size=8.0 K, encoding=NONE, compression=NONE, seqNum=82, earliestPutTs=1684965435117 2023-05-24 21:57:37,363 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 1231e2e91ed04748a9948d835e098366, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=96, earliestPutTs=1684965457305 2023-05-24 21:57:37,363 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting fb01b204a7fb46aa80749ff73bc095d4, keycount=19, bloomtype=ROW, size=24.7 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1684965457316 2023-05-24 21:57:37,374 INFO 
[RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#38 average throughput is 29.76 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:37,386 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/baf0e6937e714cffb00ee8171b193b2b as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/baf0e6937e714cffb00ee8171b193b2b 2023-05-24 21:57:37,393 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into baf0e6937e714cffb00ee8171b193b2b(size=35.5 K), total size for store is 35.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:57:37,393 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:37,393 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965457361; duration=0sec 2023-05-24 21:57:37,393 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:39,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:39,352 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:39,368 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=129 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/9c1d8e6be7dc4c22a6dc5416a6bdcded 2023-05-24 21:57:39,374 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/9c1d8e6be7dc4c22a6dc5416a6bdcded as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded 2023-05-24 21:57:39,381 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded, entries=7, sequenceid=129, filesize=12.1 K 2023-05-24 21:57:39,381 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 52b5c6e24874c3512bf59506a4301984 in 29ms, sequenceid=129, compaction requested=false 2023-05-24 21:57:39,382 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:39,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:39,382 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-24 21:57:39,393 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=149 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/3cd2fe542dd241faa48aa53b6793c60f 2023-05-24 21:57:39,398 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/3cd2fe542dd241faa48aa53b6793c60f as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f 2023-05-24 21:57:39,399 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 21:57:39,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] ipc.CallRunner(144): callId: 141 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:45250 deadline: 1684965469399, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:39,403 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f, entries=17, sequenceid=149, filesize=22.7 K 2023-05-24 21:57:39,404 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=12.61 KB/12912 for 52b5c6e24874c3512bf59506a4301984 in 22ms, sequenceid=149, compaction requested=true 2023-05-24 21:57:39,404 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:39,404 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 21:57:39,404 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:39,405 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 71922 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:39,405 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:57:39,405 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
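The RegionTooBusyException above is the server rejecting a write because the region's memstore has grown past its blocking limit (32.0 K in this run) faster than the flusher can drain it. The stock HBase client already retries this exception internally; the sketch below only illustrates the idea with an explicit retry-and-backoff loop. The helper name, attempt count, and sleep values are hypothetical; only the table name and exception class come from the log.

    import org.apache.hadoop.hbase.RegionTooBusyException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;

    public class PutWithBackoff {
      // Hypothetical helper: retry a Put a few times while the region is busy flushing.
      static void putWithBackoff(Connection conn, Put put) throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("TestLogRolling-testLogRolling"))) {
          long sleepMs = 100;
          for (int attempt = 0; attempt < 5; attempt++) {
            try {
              table.put(put);
              return;
            } catch (RegionTooBusyException e) {
              // The memstore is above its blocking limit (32 K in this test run);
              // give MemStoreFlusher time to drain it, then try again.
              Thread.sleep(sleepMs);
              sleepMs *= 2;
            }
          }
          throw new IllegalStateException("region still too busy after retries");
        }
      }
    }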
2023-05-24 21:57:39,405 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/baf0e6937e714cffb00ee8171b193b2b, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=70.2 K 2023-05-24 21:57:39,406 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.Compactor(207): Compacting baf0e6937e714cffb00ee8171b193b2b, keycount=29, bloomtype=ROW, size=35.5 K, encoding=NONE, compression=NONE, seqNum=118, earliestPutTs=1684965435117 2023-05-24 21:57:39,406 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.Compactor(207): Compacting 9c1d8e6be7dc4c22a6dc5416a6bdcded, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=129, earliestPutTs=1684965457342 2023-05-24 21:57:39,406 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] compactions.Compactor(207): Compacting 3cd2fe542dd241faa48aa53b6793c60f, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=149, earliestPutTs=1684965459354 2023-05-24 21:57:39,417 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#41 average throughput is 27.19 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:39,430 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/339c0d58d6b140c1b07d0959ba6976e2 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/339c0d58d6b140c1b07d0959ba6976e2 2023-05-24 21:57:39,436 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into 339c0d58d6b140c1b07d0959ba6976e2(size=60.9 K), total size for store is 60.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
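The "Exploring compaction algorithm has selected 3 files ... with 1 in ratio" lines mean the policy found a window of store files whose sizes pass its ratio test. Below is a simplified sketch of that test, assuming the default hbase.hstore.compaction.ratio of 1.2; it captures the core idea only and is not the actual ExploringCompactionPolicy implementation. The file sizes in the usage example are the approximate 35.5 K, 12.1 K and 22.7 K files selected at 21:57:39, which pass the check and are rewritten into 339c0d58d6b140c1b07d0959ba6976e2.

    public class CompactionRatioSketch {
      // Sketch of the ratio check: every file in the candidate window must be no larger
      // than the combined size of the other files times hbase.hstore.compaction.ratio.
      static boolean filesInRatio(long[] fileSizes, double ratio) {
        long total = 0;
        for (long size : fileSizes) {
          total += size;
        }
        for (long size : fileSizes) {
          if (size > (total - size) * ratio) {
            return false;
          }
        }
        return true;
      }

      public static void main(String[] args) {
        // Approximate byte sizes of the 35.5 K, 12.1 K and 22.7 K store files above.
        long[] sizes = {35_500L, 12_100L, 22_700L};
        System.out.println(filesInRatio(sizes, 1.2)); // prints true: all three are compacted
      }
    }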
2023-05-24 21:57:39,436 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:39,436 INFO [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965459404; duration=0sec 2023-05-24 21:57:39,436 DEBUG [RS:0;jenkins-hbase20:46117-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:47,220 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-05-24 21:57:47,221 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=2, created chunk count=13, reused chunk count=32, reuseRatio=71.11% 2023-05-24 21:57:49,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:49,459 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=13.66 KB heapSize=14.88 KB 2023-05-24 21:57:49,482 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=13.66 KB at sequenceid=166 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/e60e1bace45e4746a6c7c8c528dbe87c 2023-05-24 21:57:49,491 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/e60e1bace45e4746a6c7c8c528dbe87c as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c 2023-05-24 21:57:49,525 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c, entries=13, sequenceid=166, filesize=18.4 K 2023-05-24 21:57:49,526 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~13.66 KB/13988, heapSize ~14.86 KB/15216, currentSize=1.05 KB/1076 for 52b5c6e24874c3512bf59506a4301984 in 67ms, sequenceid=166, compaction requested=false 2023-05-24 21:57:49,526 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:51,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:51,476 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:51,486 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=176 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/2b485c29017c49a793063d005ac13637 2023-05-24 21:57:51,492 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/2b485c29017c49a793063d005ac13637 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637 2023-05-24 21:57:51,497 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637, entries=7, sequenceid=176, filesize=12.1 K 2023-05-24 21:57:51,498 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=19.96 KB/20444 for 52b5c6e24874c3512bf59506a4301984 in 22ms, sequenceid=176, compaction requested=true 2023-05-24 21:57:51,498 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:51,498 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-05-24 21:57:51,498 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:51,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:51,499 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=21.02 KB heapSize=22.75 KB 2023-05-24 21:57:51,499 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 93652 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:51,499 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:57:51,499 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
2023-05-24 21:57:51,499 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/339c0d58d6b140c1b07d0959ba6976e2, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=91.5 K 2023-05-24 21:57:51,500 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 339c0d58d6b140c1b07d0959ba6976e2, keycount=53, bloomtype=ROW, size=60.9 K, encoding=NONE, compression=NONE, seqNum=149, earliestPutTs=1684965435117 2023-05-24 21:57:51,500 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting e60e1bace45e4746a6c7c8c528dbe87c, keycount=13, bloomtype=ROW, size=18.4 K, encoding=NONE, compression=NONE, seqNum=166, earliestPutTs=1684965459382 2023-05-24 21:57:51,501 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 2b485c29017c49a793063d005ac13637, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1684965469461 2023-05-24 21:57:51,511 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.02 KB at sequenceid=199 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/3f1243df88f44bdab5aa263dc35c6dca 2023-05-24 21:57:51,517 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/3f1243df88f44bdab5aa263dc35c6dca as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca 2023-05-24 21:57:51,518 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#45 average throughput is 37.45 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:51,526 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca, entries=20, sequenceid=199, filesize=25.8 K 2023-05-24 21:57:51,527 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.02 KB/21520, heapSize ~22.73 KB/23280, currentSize=6.30 KB/6456 for 52b5c6e24874c3512bf59506a4301984 in 28ms, sequenceid=199, compaction requested=false 2023-05-24 21:57:51,527 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:51,531 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/3016bf5213d44b2ca0bf2ab51d801779 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3016bf5213d44b2ca0bf2ab51d801779 2023-05-24 21:57:51,537 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into 3016bf5213d44b2ca0bf2ab51d801779(size=82.1 K), total size for store is 107.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-05-24 21:57:51,537 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:51,537 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965471498; duration=0sec 2023-05-24 21:57:51,537 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:53,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:53,511 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:57:53,522 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=210 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/833c60b58ca940d0a9df04776db53318 2023-05-24 21:57:53,530 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/833c60b58ca940d0a9df04776db53318 as 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318 2023-05-24 21:57:53,537 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318, entries=7, sequenceid=210, filesize=12.1 K 2023-05-24 21:57:53,538 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=12.61 KB/12912 for 52b5c6e24874c3512bf59506a4301984 in 28ms, sequenceid=210, compaction requested=true 2023-05-24 21:57:53,538 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:53,538 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:53,538 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:57:53,540 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 122937 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:57:53,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:57:53,540 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:57:53,540 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=14.71 KB heapSize=16 KB 2023-05-24 21:57:53,540 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
2023-05-24 21:57:53,540 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3016bf5213d44b2ca0bf2ab51d801779, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=120.1 K 2023-05-24 21:57:53,540 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 3016bf5213d44b2ca0bf2ab51d801779, keycount=73, bloomtype=ROW, size=82.1 K, encoding=NONE, compression=NONE, seqNum=176, earliestPutTs=1684965435117 2023-05-24 21:57:53,541 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 3f1243df88f44bdab5aa263dc35c6dca, keycount=20, bloomtype=ROW, size=25.8 K, encoding=NONE, compression=NONE, seqNum=199, earliestPutTs=1684965471476 2023-05-24 21:57:53,541 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 833c60b58ca940d0a9df04776db53318, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1684965471499 2023-05-24 21:57:53,569 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#48 average throughput is 51.31 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:57:53,569 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=14.71 KB at sequenceid=227 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/8b74655b2cda433eaac8dbaee7ee5868 2023-05-24 21:57:53,576 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/8b74655b2cda433eaac8dbaee7ee5868 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868 2023-05-24 21:57:53,576 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 21:57:53,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] ipc.CallRunner(144): callId: 207 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:45250 deadline: 1684965483576, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:57:53,581 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868, entries=14, sequenceid=227, filesize=19.5 K 2023-05-24 21:57:53,582 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~14.71 KB/15064, heapSize ~15.98 KB/16368, currentSize=15.76 KB/16140 for 52b5c6e24874c3512bf59506a4301984 in 42ms, sequenceid=227, compaction requested=false 2023-05-24 21:57:53,582 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:53,594 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/f37aa254126546c9966d35c3e25908ae as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f37aa254126546c9966d35c3e25908ae 2023-05-24 21:57:53,600 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into f37aa254126546c9966d35c3e25908ae(size=110.7 K), total size for store is 130.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
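The repeated 7-20 KB flushes and the 32.0 K blocking limit indicate the test runs with memstore sizes scaled far below the production default of 128 MB. The sketch below shows a configuration that would produce this behaviour; the exact values are an assumption inferred from the "Over memstore limit=32.0 K" message (an 8 KB flush size with the default block multiplier of 4), not a quote of the test's actual setup.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class TinyMemstoreConf {
      // Returns a configuration whose memstore limits are scaled down to test dimensions.
      static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // Flush a region's memstore once it holds about 8 KB ...
        conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 8 * 1024);
        // ... and reject writes with RegionTooBusyException once it exceeds 4x that
        // (32 KB), matching the "Over memstore limit=32.0 K" messages in this log.
        conf.setInt("hbase.hregion.memstore.block.multiplier", 4);
        return conf;
      }
    }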
2023-05-24 21:57:53,600 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:57:53,600 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965473538; duration=0sec 2023-05-24 21:57:53,600 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:57:54,131 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-05-24 21:58:03,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:03,652 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=16.81 KB heapSize=18.25 KB 2023-05-24 21:58:03,669 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=16.81 KB at sequenceid=247 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/6a5788f272974284a05aca0d8a46da2d 2023-05-24 21:58:03,677 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/6a5788f272974284a05aca0d8a46da2d as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d 2023-05-24 21:58:03,682 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d, entries=16, sequenceid=247, filesize=21.6 K 2023-05-24 21:58:03,683 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~16.81 KB/17216, heapSize ~18.23 KB/18672, currentSize=1.05 KB/1076 for 52b5c6e24874c3512bf59506a4301984 in 31ms, sequenceid=247, compaction requested=true 2023-05-24 21:58:03,683 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:03,683 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:03,683 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:58:03,685 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 155403 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 
21:58:03,685 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:58:03,685 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:03,685 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f37aa254126546c9966d35c3e25908ae, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=151.8 K 2023-05-24 21:58:03,685 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting f37aa254126546c9966d35c3e25908ae, keycount=100, bloomtype=ROW, size=110.7 K, encoding=NONE, compression=NONE, seqNum=210, earliestPutTs=1684965435117 2023-05-24 21:58:03,686 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 8b74655b2cda433eaac8dbaee7ee5868, keycount=14, bloomtype=ROW, size=19.5 K, encoding=NONE, compression=NONE, seqNum=227, earliestPutTs=1684965473512 2023-05-24 21:58:03,686 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 6a5788f272974284a05aca0d8a46da2d, keycount=16, bloomtype=ROW, size=21.6 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1684965473540 2023-05-24 21:58:03,697 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#50 average throughput is 66.70 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:58:03,706 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/f2c0f32e77f2436fae1a4e0638fe2345 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f2c0f32e77f2436fae1a4e0638fe2345 2023-05-24 21:58:03,712 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into f2c0f32e77f2436fae1a4e0638fe2345(size=142.5 K), total size for store is 142.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 21:58:03,712 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:03,713 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965483683; duration=0sec 2023-05-24 21:58:03,713 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:05,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:05,673 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=7.36 KB heapSize=8.13 KB 2023-05-24 21:58:05,684 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.36 KB at sequenceid=258 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/73ee72a4a43345e7b6a05f394fbb41d9 2023-05-24 21:58:05,692 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/73ee72a4a43345e7b6a05f394fbb41d9 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9 2023-05-24 21:58:05,697 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9, entries=7, sequenceid=258, filesize=12.1 K 2023-05-24 21:58:05,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~7.36 KB/7532, heapSize ~8.11 KB/8304, currentSize=16.81 KB/17216 for 52b5c6e24874c3512bf59506a4301984 in 26ms, sequenceid=258, compaction requested=false 2023-05-24 21:58:05,699 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:05,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:05,699 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=17.86 KB heapSize=19.38 KB 2023-05-24 21:58:05,713 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=278 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/07ffa114ef9144799647b852886385ee 2023-05-24 21:58:05,719 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/07ffa114ef9144799647b852886385ee as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee 2023-05-24 21:58:05,724 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee, entries=17, sequenceid=278, filesize=22.7 K 2023-05-24 21:58:05,725 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18292, heapSize ~19.36 KB/19824, currentSize=9.46 KB/9684 for 52b5c6e24874c3512bf59506a4301984 in 26ms, sequenceid=278, compaction requested=true 2023-05-24 21:58:05,725 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:05,725 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:05,725 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:58:05,726 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 181585 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:58:05,727 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:58:05,727 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
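The "average throughput is ... MB/second ... total limit is 50.00 MB/second" lines come from PressureAwareThroughputController, which caps how fast compactions may write; the limit sits at the lower bound while there is no memstore pressure and rises toward the higher bound as pressure grows. A hedged sketch of the two standard tuning keys is below; the values are illustrative (chosen to match the 50.00 MB/second limit reported in these records), not something this test is known to set.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputConf {
      static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // Compaction writes are throttled to lower.bound when there is no memstore
        // pressure and ramp up toward higher.bound as pressure increases.
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        return conf;
      }
    }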
2023-05-24 21:58:05,727 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f2c0f32e77f2436fae1a4e0638fe2345, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=177.3 K 2023-05-24 21:58:05,727 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting f2c0f32e77f2436fae1a4e0638fe2345, keycount=130, bloomtype=ROW, size=142.5 K, encoding=NONE, compression=NONE, seqNum=247, earliestPutTs=1684965435117 2023-05-24 21:58:05,727 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 73ee72a4a43345e7b6a05f394fbb41d9, keycount=7, bloomtype=ROW, size=12.1 K, encoding=NONE, compression=NONE, seqNum=258, earliestPutTs=1684965483653 2023-05-24 21:58:05,728 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 07ffa114ef9144799647b852886385ee, keycount=17, bloomtype=ROW, size=22.7 K, encoding=NONE, compression=NONE, seqNum=278, earliestPutTs=1684965485673 2023-05-24 21:58:05,738 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#53 average throughput is 79.01 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:58:05,752 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/7d8e59779a914f68bf8acad2d6676a9d as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/7d8e59779a914f68bf8acad2d6676a9d 2023-05-24 21:58:05,757 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into 7d8e59779a914f68bf8acad2d6676a9d(size=167.9 K), total size for store is 167.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 21:58:05,757 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:05,757 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965485725; duration=0sec 2023-05-24 21:58:05,757 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:07,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:07,714 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=10.51 KB heapSize=11.50 KB 2023-05-24 21:58:07,730 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=10.51 KB at sequenceid=292 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/8e76e98878b94050ac2325c92458ab3e 2023-05-24 21:58:07,736 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/8e76e98878b94050ac2325c92458ab3e as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e 2023-05-24 21:58:07,743 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e, entries=10, sequenceid=292, filesize=15.3 K 2023-05-24 21:58:07,743 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~10.51 KB/10760, heapSize ~11.48 KB/11760, currentSize=17.86 KB/18292 for 52b5c6e24874c3512bf59506a4301984 in 29ms, sequenceid=292, compaction requested=false 2023-05-24 21:58:07,744 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:07,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:07,745 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=18.91 KB heapSize=20.50 KB 2023-05-24 21:58:07,755 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=18.91 KB at sequenceid=313 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/ba1bf75a49c44c5689ba0c4c21c3e287 2023-05-24 21:58:07,760 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(4965): Region is too 
busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:3155) at org.apache.hadoop.hbase.regionserver.RSRpcServices.put(RSRpcServices.java:3006) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2969) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-05-24 21:58:07,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] ipc.CallRunner(144): callId: 273 service: ClientService methodName: Mutate size: 1.2 K connection: 148.251.75.209:45250 deadline: 1684965497760, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=32.0 K, regionName=52b5c6e24874c3512bf59506a4301984, server=jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:58:07,761 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/ba1bf75a49c44c5689ba0c4c21c3e287 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287 2023-05-24 21:58:07,765 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287, entries=18, sequenceid=313, filesize=23.7 K 2023-05-24 21:58:07,766 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~18.91 KB/19368, heapSize ~20.48 KB/20976, currentSize=11.56 KB/11836 for 52b5c6e24874c3512bf59506a4301984 in 21ms, sequenceid=313, compaction requested=true 2023-05-24 21:58:07,766 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:07,766 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:07,766 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-05-24 21:58:07,767 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 211907 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-05-24 21:58:07,767 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1912): 52b5c6e24874c3512bf59506a4301984/info is initiating minor compaction (all files) 2023-05-24 21:58:07,767 INFO 
[RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 52b5c6e24874c3512bf59506a4301984/info in TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:07,767 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/7d8e59779a914f68bf8acad2d6676a9d, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287] into tmpdir=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp, totalSize=206.9 K 2023-05-24 21:58:07,768 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 7d8e59779a914f68bf8acad2d6676a9d, keycount=154, bloomtype=ROW, size=167.9 K, encoding=NONE, compression=NONE, seqNum=278, earliestPutTs=1684965435117 2023-05-24 21:58:07,768 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting 8e76e98878b94050ac2325c92458ab3e, keycount=10, bloomtype=ROW, size=15.3 K, encoding=NONE, compression=NONE, seqNum=292, earliestPutTs=1684965485700 2023-05-24 21:58:07,768 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] compactions.Compactor(207): Compacting ba1bf75a49c44c5689ba0c4c21c3e287, keycount=18, bloomtype=ROW, size=23.7 K, encoding=NONE, compression=NONE, seqNum=313, earliestPutTs=1684965487716 2023-05-24 21:58:07,778 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] throttle.PressureAwareThroughputController(145): 52b5c6e24874c3512bf59506a4301984#info#compaction#56 average throughput is 93.38 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-05-24 21:58:07,796 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/ece9d754fffa44a49fa25152799b5bc8 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ece9d754fffa44a49fa25152799b5bc8 2023-05-24 21:58:07,801 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 52b5c6e24874c3512bf59506a4301984/info of 52b5c6e24874c3512bf59506a4301984 into ece9d754fffa44a49fa25152799b5bc8(size=197.5 K), total size for store is 197.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-05-24 21:58:07,801 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:07,801 INFO [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984., storeName=52b5c6e24874c3512bf59506a4301984/info, priority=13, startTime=1684965487766; duration=0sec 2023-05-24 21:58:07,801 DEBUG [RS:0;jenkins-hbase20:46117-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-05-24 21:58:17,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46117] regionserver.HRegion(9158): Flush requested on 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:17,798 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=12.61 KB heapSize=13.75 KB 2023-05-24 21:58:17,808 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.61 KB at sequenceid=329 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/fc684345769a4cffa9d797d97be1fdc2 2023-05-24 21:58:17,815 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/fc684345769a4cffa9d797d97be1fdc2 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fc684345769a4cffa9d797d97be1fdc2 2023-05-24 21:58:17,822 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fc684345769a4cffa9d797d97be1fdc2, entries=12, sequenceid=329, filesize=17.4 K 2023-05-24 21:58:17,823 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~12.61 KB/12912, heapSize ~13.73 KB/14064, currentSize=1.05 KB/1076 for 52b5c6e24874c3512bf59506a4301984 in 25ms, sequenceid=329, compaction requested=false 2023-05-24 21:58:17,823 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:19,800 INFO [Listener at localhost.localdomain/34377] wal.AbstractTestLogRolling(188): after writing there are 0 log files 2023-05-24 21:58:19,833 INFO [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965422220 with entries=314, filesize=308.38 KB; new WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499800 2023-05-24 21:58:19,833 DEBUG [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:43623,DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1,DISK], DatanodeInfoWithStorage[127.0.0.1:42061,DS-59494e1e-413d-42d2-8723-3d4e7005179e,DISK]] 2023-05-24 21:58:19,834 DEBUG [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965422220 is not closed yet, will try archiving it next time 2023-05-24 21:58:19,841 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegion(2446): Flush status journal for 8e4ff1e6903edab92e16eeb232bbe277: 2023-05-24 21:58:19,841 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2745): Flushing 989bbe07e8b19f8d9c81b639ab087e4b 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:58:19,850 INFO [Listener at localhost.localdomain/34377] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/.tmp/info/2b105ed1ed3049d090478ba8df312423 2023-05-24 21:58:19,855 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/.tmp/info/2b105ed1ed3049d090478ba8df312423 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/info/2b105ed1ed3049d090478ba8df312423 2023-05-24 21:58:19,861 INFO [Listener at localhost.localdomain/34377] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/info/2b105ed1ed3049d090478ba8df312423, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 21:58:19,862 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 989bbe07e8b19f8d9c81b639ab087e4b in 21ms, sequenceid=6, compaction requested=false 2023-05-24 21:58:19,863 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegion(2446): Flush status journal for 989bbe07e8b19f8d9c81b639ab087e4b: 2023-05-24 21:58:19,864 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.26 KB heapSize=4.19 KB 2023-05-24 21:58:19,872 INFO [Listener at localhost.localdomain/34377] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.26 KB at sequenceid=24 (bloomFilter=false), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/info/752e5405886c431697bf08886216e209 2023-05-24 21:58:19,877 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/.tmp/info/752e5405886c431697bf08886216e209 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info/752e5405886c431697bf08886216e209 2023-05-24 21:58:19,882 INFO [Listener at localhost.localdomain/34377] 
regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/info/752e5405886c431697bf08886216e209, entries=16, sequenceid=24, filesize=7.0 K 2023-05-24 21:58:19,883 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2948): Finished flush of dataSize ~2.26 KB/2316, heapSize ~3.67 KB/3760, currentSize=0 B/0 for 1588230740 in 20ms, sequenceid=24, compaction requested=false 2023-05-24 21:58:19,884 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-05-24 21:58:19,884 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2745): Flushing 52b5c6e24874c3512bf59506a4301984 1/1 column families, dataSize=1.05 KB heapSize=1.38 KB 2023-05-24 21:58:19,896 INFO [Listener at localhost.localdomain/34377] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.05 KB at sequenceid=333 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/155c2ccbda6c4a44ad893c66bbe58438 2023-05-24 21:58:19,901 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/.tmp/info/155c2ccbda6c4a44ad893c66bbe58438 as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/155c2ccbda6c4a44ad893c66bbe58438 2023-05-24 21:58:19,905 INFO [Listener at localhost.localdomain/34377] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/155c2ccbda6c4a44ad893c66bbe58438, entries=1, sequenceid=333, filesize=5.8 K 2023-05-24 21:58:19,906 INFO [Listener at localhost.localdomain/34377] regionserver.HRegion(2948): Finished flush of dataSize ~1.05 KB/1076, heapSize ~1.36 KB/1392, currentSize=0 B/0 for 52b5c6e24874c3512bf59506a4301984 in 22ms, sequenceid=333, compaction requested=true 2023-05-24 21:58:19,906 DEBUG [Listener at localhost.localdomain/34377] regionserver.HRegion(2446): Flush status journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:19,952 INFO [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(802): Rolled WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499800 with entries=4, filesize=1.22 KB; new WAL /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499907 2023-05-24 21:58:19,952 DEBUG [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42061,DS-59494e1e-413d-42d2-8723-3d4e7005179e,DISK], DatanodeInfoWithStorage[127.0.0.1:43623,DS-fd7e26a9-9326-434d-b9a5-5f5218da89c1,DISK]] 2023-05-24 21:58:19,952 DEBUG [Listener at localhost.localdomain/34377] wal.AbstractFSWAL(716): 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499800 is not closed yet, will try archiving it next time 2023-05-24 21:58:19,952 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965422220 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965422220 2023-05-24 21:58:19,953 INFO [Listener at localhost.localdomain/34377] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-05-24 21:58:19,955 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499800 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs/jenkins-hbase20.apache.org%2C46117%2C1684965421791.1684965499800 2023-05-24 21:58:20,054 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:58:20,054 INFO [Listener at localhost.localdomain/34377] client.ConnectionImplementation(1974): Closing master protocol: MasterService 2023-05-24 21:58:20,054 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3150a4ee to 127.0.0.1:60895 2023-05-24 21:58:20,055 DEBUG [Listener at localhost.localdomain/34377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:20,055 DEBUG [Listener at localhost.localdomain/34377] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:58:20,055 DEBUG [Listener at localhost.localdomain/34377] util.JVMClusterUtil(257): Found active master hash=828710495, stopped=false 2023-05-24 21:58:20,056 INFO [Listener at localhost.localdomain/34377] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:58:20,059 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:20,059 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:20,059 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:20,059 INFO [Listener at localhost.localdomain/34377] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:58:20,060 DEBUG [Listener at localhost.localdomain/34377] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x105addcb to 127.0.0.1:60895 2023-05-24 21:58:20,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46117-0x1017f7a0ad50001, 
quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:20,061 DEBUG [Listener at localhost.localdomain/34377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:20,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:20,062 INFO [Listener at localhost.localdomain/34377] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,46117,1684965421791' ***** 2023-05-24 21:58:20,062 INFO [Listener at localhost.localdomain/34377] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:58:20,062 INFO [RS:0;jenkins-hbase20:46117] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:58:20,062 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:58:20,062 INFO [RS:0;jenkins-hbase20:46117] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:58:20,063 INFO [RS:0;jenkins-hbase20:46117] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:58:20,063 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(3303): Received CLOSE for 8e4ff1e6903edab92e16eeb232bbe277 2023-05-24 21:58:20,063 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(3303): Received CLOSE for 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:58:20,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8e4ff1e6903edab92e16eeb232bbe277, disabling compactions & flushes 2023-05-24 21:58:20,063 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(3303): Received CLOSE for 52b5c6e24874c3512bf59506a4301984 2023-05-24 21:58:20,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:58:20,063 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:58:20,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:58:20,063 DEBUG [RS:0;jenkins-hbase20:46117] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x050bf2e8 to 127.0.0.1:60895 2023-05-24 21:58:20,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. after waiting 0 ms 2023-05-24 21:58:20,063 DEBUG [RS:0;jenkins-hbase20:46117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:20,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:58:20,064 INFO [RS:0;jenkins-hbase20:46117] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:58:20,064 INFO [RS:0;jenkins-hbase20:46117] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-05-24 21:58:20,064 INFO [RS:0;jenkins-hbase20:46117] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:58:20,064 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:58:20,066 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-05-24 21:58:20,066 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1478): Online Regions={8e4ff1e6903edab92e16eeb232bbe277=TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277., 989bbe07e8b19f8d9c81b639ab087e4b=hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b., 1588230740=hbase:meta,,1.1588230740, 52b5c6e24874c3512bf59506a4301984=TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.} 2023-05-24 21:58:20,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:58:20,066 DEBUG [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1504): Waiting on 1588230740, 52b5c6e24874c3512bf59506a4301984, 8e4ff1e6903edab92e16eeb232bbe277, 989bbe07e8b19f8d9c81b639ab087e4b 2023-05-24 21:58:20,066 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:58:20,066 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-bottom] to archive 2023-05-24 21:58:20,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:58:20,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:58:20,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:58:20,069 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-24 21:58:20,073 DEBUG [StoreCloser-TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09 2023-05-24 21:58:20,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/meta/1588230740/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-05-24 21:58:20,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 21:58:20,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:58:20,078 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:58:20,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 21:58:20,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/8e4ff1e6903edab92e16eeb232bbe277/recovered.edits/90.seqid, newMaxSeqId=90, maxSeqId=85 2023-05-24 21:58:20,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:58:20,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8e4ff1e6903edab92e16eeb232bbe277: 2023-05-24 21:58:20,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,,1684965445264.8e4ff1e6903edab92e16eeb232bbe277. 2023-05-24 21:58:20,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 989bbe07e8b19f8d9c81b639ab087e4b, disabling compactions & flushes 2023-05-24 21:58:20,087 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:58:20,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:58:20,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. after waiting 0 ms 2023-05-24 21:58:20,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 
2023-05-24 21:58:20,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/hbase/namespace/989bbe07e8b19f8d9c81b639ab087e4b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 21:58:20,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:58:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 989bbe07e8b19f8d9c81b639ab087e4b: 2023-05-24 21:58:20,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684965422454.989bbe07e8b19f8d9c81b639ab087e4b. 2023-05-24 21:58:20,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 52b5c6e24874c3512bf59506a4301984, disabling compactions & flushes 2023-05-24 21:58:20,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:20,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:20,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. after waiting 0 ms 2023-05-24 21:58:20,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 
2023-05-24 21:58:20,102 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:58:20,110 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09->hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/738d7332a009ea44c634f4308996fd09/info/0312150fca5d4e938f2893e6247550d8-top, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/c2cc445557db4ce1835c82bf6bc9f20b, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/baf0e6937e714cffb00ee8171b193b2b, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/339c0d58d6b140c1b07d0959ba6976e2, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3016bf5213d44b2ca0bf2ab51d801779, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f37aa254126546c9966d35c3e25908ae, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318, 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f2c0f32e77f2436fae1a4e0638fe2345, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/7d8e59779a914f68bf8acad2d6676a9d, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287] to archive 2023-05-24 21:58:20,111 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-05-24 21:58:20,112 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 21:58:20,113 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 21:58:20,114 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/0312150fca5d4e938f2893e6247550d8.738d7332a009ea44c634f4308996fd09 2023-05-24 21:58:20,115 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/c2cc445557db4ce1835c82bf6bc9f20b to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/c2cc445557db4ce1835c82bf6bc9f20b 2023-05-24 21:58:20,116 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/1231e2e91ed04748a9948d835e098366 2023-05-24 21:58:20,117 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/baf0e6937e714cffb00ee8171b193b2b to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/baf0e6937e714cffb00ee8171b193b2b 2023-05-24 21:58:20,119 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/fb01b204a7fb46aa80749ff73bc095d4 2023-05-24 21:58:20,120 DEBUG 
[StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/9c1d8e6be7dc4c22a6dc5416a6bdcded 2023-05-24 21:58:20,121 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/339c0d58d6b140c1b07d0959ba6976e2 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/339c0d58d6b140c1b07d0959ba6976e2 2023-05-24 21:58:20,122 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3cd2fe542dd241faa48aa53b6793c60f 2023-05-24 21:58:20,123 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/e60e1bace45e4746a6c7c8c528dbe87c 2023-05-24 21:58:20,124 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3016bf5213d44b2ca0bf2ab51d801779 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3016bf5213d44b2ca0bf2ab51d801779 2023-05-24 21:58:20,125 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637 to 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/2b485c29017c49a793063d005ac13637 2023-05-24 21:58:20,126 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/3f1243df88f44bdab5aa263dc35c6dca 2023-05-24 21:58:20,127 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f37aa254126546c9966d35c3e25908ae to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f37aa254126546c9966d35c3e25908ae 2023-05-24 21:58:20,128 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/833c60b58ca940d0a9df04776db53318 2023-05-24 21:58:20,129 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8b74655b2cda433eaac8dbaee7ee5868 2023-05-24 21:58:20,130 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f2c0f32e77f2436fae1a4e0638fe2345 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/f2c0f32e77f2436fae1a4e0638fe2345 2023-05-24 21:58:20,131 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/6a5788f272974284a05aca0d8a46da2d 2023-05-24 21:58:20,132 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/73ee72a4a43345e7b6a05f394fbb41d9 2023-05-24 21:58:20,133 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/7d8e59779a914f68bf8acad2d6676a9d to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/7d8e59779a914f68bf8acad2d6676a9d 2023-05-24 21:58:20,134 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/07ffa114ef9144799647b852886385ee 2023-05-24 21:58:20,135 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/8e76e98878b94050ac2325c92458ab3e 2023-05-24 21:58:20,136 DEBUG [StoreCloser-TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287 to hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/archive/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/info/ba1bf75a49c44c5689ba0c4c21c3e287 2023-05-24 
21:58:20,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/data/default/TestLogRolling-testLogRolling/52b5c6e24874c3512bf59506a4301984/recovered.edits/336.seqid, newMaxSeqId=336, maxSeqId=85 2023-05-24 21:58:20,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:20,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 52b5c6e24874c3512bf59506a4301984: 2023-05-24 21:58:20,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed TestLogRolling-testLogRolling,row0062,1684965445264.52b5c6e24874c3512bf59506a4301984. 2023-05-24 21:58:20,266 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46117,1684965421791; all regions closed. 2023-05-24 21:58:20,267 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:58:20,271 DEBUG [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs 2023-05-24 21:58:20,271 INFO [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C46117%2C1684965421791.meta:.meta(num 1684965422396) 2023-05-24 21:58:20,271 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/WALs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:58:20,277 DEBUG [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/oldWALs 2023-05-24 21:58:20,277 INFO [RS:0;jenkins-hbase20:46117] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C46117%2C1684965421791:(num 1684965499907) 2023-05-24 21:58:20,277 DEBUG [RS:0;jenkins-hbase20:46117] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:20,277 INFO [RS:0;jenkins-hbase20:46117] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:58:20,277 INFO [RS:0;jenkins-hbase20:46117] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 21:58:20,277 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 21:58:20,278 INFO [RS:0;jenkins-hbase20:46117] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46117 2023-05-24 21:58:20,280 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46117,1684965421791 2023-05-24 21:58:20,280 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:58:20,280 ERROR [Listener at localhost.localdomain/34377-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@35f7eafb rejected from java.util.concurrent.ThreadPoolExecutor@31c8689d[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-05-24 21:58:20,280 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:58:20,281 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46117,1684965421791] 2023-05-24 21:58:20,281 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46117,1684965421791; numProcessing=1 2023-05-24 21:58:20,282 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46117,1684965421791 already deleted, retry=false 2023-05-24 21:58:20,282 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46117,1684965421791 expired; onlineServers=0 2023-05-24 21:58:20,282 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,40737,1684965421757' ***** 2023-05-24 21:58:20,282 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:58:20,282 DEBUG [M:0;jenkins-hbase20:40737] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@355a9cab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:58:20,282 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegionServer(1144): stopping server 
jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:58:20,282 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40737,1684965421757; all regions closed. 2023-05-24 21:58:20,282 DEBUG [M:0;jenkins-hbase20:40737] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:20,282 DEBUG [M:0;jenkins-hbase20:40737] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:58:20,282 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 21:58:20,282 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965422031] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965422031,5,FailOnTimeoutGroup] 2023-05-24 21:58:20,282 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965422031] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965422031,5,FailOnTimeoutGroup] 2023-05-24 21:58:20,282 DEBUG [M:0;jenkins-hbase20:40737] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:58:20,284 INFO [M:0;jenkins-hbase20:40737] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:58:20,284 INFO [M:0;jenkins-hbase20:40737] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-05-24 21:58:20,284 INFO [M:0;jenkins-hbase20:40737] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:58:20,284 DEBUG [M:0;jenkins-hbase20:40737] master.HMaster(1512): Stopping service threads 2023-05-24 21:58:20,284 INFO [M:0;jenkins-hbase20:40737] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:58:20,285 ERROR [M:0;jenkins-hbase20:40737] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-05-24 21:58:20,285 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:58:20,285 INFO [M:0;jenkins-hbase20:40737] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:58:20,285 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:20,285 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-05-24 21:58:20,285 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:58:20,285 DEBUG [M:0;jenkins-hbase20:40737] zookeeper.ZKUtil(398): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:58:20,285 WARN [M:0;jenkins-hbase20:40737] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:58:20,285 INFO [M:0;jenkins-hbase20:40737] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:58:20,286 INFO [M:0;jenkins-hbase20:40737] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:58:20,286 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:58:20,286 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:20,286 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:20,286 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:58:20,286 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 21:58:20,286 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=64.78 KB heapSize=78.52 KB 2023-05-24 21:58:20,297 INFO [M:0;jenkins-hbase20:40737] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=64.78 KB at sequenceid=160 (bloomFilter=true), to=hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2244aa0f6b9c43a99e051e8b030193ce 2023-05-24 21:58:20,305 INFO [M:0;jenkins-hbase20:40737] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2244aa0f6b9c43a99e051e8b030193ce 2023-05-24 21:58:20,307 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2244aa0f6b9c43a99e051e8b030193ce as hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2244aa0f6b9c43a99e051e8b030193ce 2023-05-24 21:58:20,315 INFO [M:0;jenkins-hbase20:40737] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2244aa0f6b9c43a99e051e8b030193ce 2023-05-24 21:58:20,315 INFO [M:0;jenkins-hbase20:40737] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40073/user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2244aa0f6b9c43a99e051e8b030193ce, entries=18, sequenceid=160, filesize=6.9 K 2023-05-24 21:58:20,317 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegion(2948): Finished flush of dataSize ~64.78 KB/66332, heapSize ~78.51 KB/80392, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 30ms, sequenceid=160, compaction requested=false 2023-05-24 21:58:20,318 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:20,318 DEBUG [M:0;jenkins-hbase20:40737] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:58:20,319 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/728fcdcb-8068-1738-aab9-9bf7e405946d/MasterData/WALs/jenkins-hbase20.apache.org,40737,1684965421757 2023-05-24 21:58:20,323 INFO [M:0;jenkins-hbase20:40737] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:58:20,323 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-05-24 21:58:20,324 INFO [M:0;jenkins-hbase20:40737] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40737 2023-05-24 21:58:20,326 DEBUG [M:0;jenkins-hbase20:40737] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,40737,1684965421757 already deleted, retry=false 2023-05-24 21:58:20,381 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:20,381 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): regionserver:46117-0x1017f7a0ad50001, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:20,381 INFO [RS:0;jenkins-hbase20:46117] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46117,1684965421791; zookeeper connection closed. 2023-05-24 21:58:20,383 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@24a39fea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@24a39fea 2023-05-24 21:58:20,383 INFO [Listener at localhost.localdomain/34377] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 21:58:20,481 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:20,481 DEBUG [Listener at localhost.localdomain/34377-EventThread] zookeeper.ZKWatcher(600): master:40737-0x1017f7a0ad50000, quorum=127.0.0.1:60895, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:20,481 INFO [M:0;jenkins-hbase20:40737] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40737,1684965421757; zookeeper connection closed. 
2023-05-24 21:58:20,482 WARN [Listener at localhost.localdomain/34377] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:58:20,487 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:58:20,596 WARN [BP-1648895742-148.251.75.209-1684965421286 heartbeating to localhost.localdomain/127.0.0.1:40073] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:58:20,596 WARN [BP-1648895742-148.251.75.209-1684965421286 heartbeating to localhost.localdomain/127.0.0.1:40073] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1648895742-148.251.75.209-1684965421286 (Datanode Uuid 273b634a-ad6f-4c94-93d5-223b068eb43d) service to localhost.localdomain/127.0.0.1:40073 2023-05-24 21:58:20,597 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/dfs/data/data3/current/BP-1648895742-148.251.75.209-1684965421286] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:20,598 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/dfs/data/data4/current/BP-1648895742-148.251.75.209-1684965421286] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:20,600 WARN [Listener at localhost.localdomain/34377] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:58:20,605 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:58:20,716 WARN [BP-1648895742-148.251.75.209-1684965421286 heartbeating to localhost.localdomain/127.0.0.1:40073] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:58:20,716 WARN [BP-1648895742-148.251.75.209-1684965421286 heartbeating to localhost.localdomain/127.0.0.1:40073] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1648895742-148.251.75.209-1684965421286 (Datanode Uuid 115e466f-ede8-41b1-94f9-321881efa71e) service to localhost.localdomain/127.0.0.1:40073 2023-05-24 21:58:20,717 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/dfs/data/data1/current/BP-1648895742-148.251.75.209-1684965421286] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:20,718 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/cluster_53cde3de-f0d4-b03a-9646-461b0d70004f/dfs/data/data2/current/BP-1648895742-148.251.75.209-1684965421286] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:20,739 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:58:20,863 INFO 
[Listener at localhost.localdomain/34377] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:58:20,893 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:58:20,902 INFO [Listener at localhost.localdomain/34377] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRolling Thread=105 (was 93) - Thread LEAK? -, OpenFileDescriptor=532 (was 497) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=75 (was 57) - SystemLoadAverage LEAK? -, ProcessCount=168 (was 170), AvailableMemoryMB=8743 (was 9164) 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.ResourceChecker(147): before: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=105, OpenFileDescriptor=532, MaxFileDescriptor=60000, SystemLoadAverage=75, ProcessCount=168, AvailableMemoryMB=8743 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=1, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/hadoop.log.dir so I do NOT create it in target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/cda394ee-ed6e-eb02-1972-ae9afaf1bdba/hadoop.tmp.dir so I do NOT create it in target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943, deleteOnExit=true 2023-05-24 21:58:20,912 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/test.cache.data in system properties and HBase conf 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/hadoop.tmp.dir in system properties and HBase conf 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/hadoop.log.dir in system properties and HBase conf 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): 
Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/mapreduce.cluster.local.dir in system properties and HBase conf 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-05-24 21:58:20,913 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-05-24 21:58:20,913 DEBUG [Listener at localhost.localdomain/34377] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-05-24 21:58:20,914 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting 
dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/nfs.dump.dir in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/java.io.tmpdir in system properties and HBase conf 2023-05-24 21:58:20,915 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/dfs.journalnode.edits.dir in system properties and HBase conf 2023-05-24 21:58:20,916 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-05-24 21:58:20,916 INFO [Listener at localhost.localdomain/34377] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-05-24 21:58:20,918 WARN [Listener at localhost.localdomain/34377] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 
2023-05-24 21:58:20,920 WARN [Listener at localhost.localdomain/34377] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:58:20,920 WARN [Listener at localhost.localdomain/34377] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:58:20,945 WARN [Listener at localhost.localdomain/34377] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:58:20,947 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:58:20,953 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/java.io.tmpdir/Jetty_localhost_localdomain_42687_hdfs____.7z1cbz/webapp 2023-05-24 21:58:21,025 INFO [Listener at localhost.localdomain/34377] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42687 2023-05-24 21:58:21,026 WARN [Listener at localhost.localdomain/34377] blockmanagement.DatanodeManager(362): The given interval for marking stale datanode = 30000, which is larger than heartbeat expire interval 20000. 2023-05-24 21:58:21,027 WARN [Listener at localhost.localdomain/34377] conf.Configuration(1701): No unit for dfs.heartbeat.interval(1) assuming SECONDS 2023-05-24 21:58:21,027 WARN [Listener at localhost.localdomain/34377] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-05-24 21:58:21,052 WARN [Listener at localhost.localdomain/37293] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:58:21,065 WARN [Listener at localhost.localdomain/37293] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:58:21,067 WARN [Listener at localhost.localdomain/37293] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:58:21,068 INFO [Listener at localhost.localdomain/37293] log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:58:21,072 INFO [Listener at localhost.localdomain/37293] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/java.io.tmpdir/Jetty_localhost_36943_datanode____.pkp84k/webapp 2023-05-24 21:58:21,143 INFO [Listener at localhost.localdomain/37293] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36943 2023-05-24 21:58:21,150 WARN [Listener at localhost.localdomain/44227] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:58:21,167 WARN [Listener at localhost.localdomain/44227] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-05-24 21:58:21,169 WARN [Listener at localhost.localdomain/44227] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-05-24 21:58:21,170 INFO [Listener at localhost.localdomain/44227] 
log.Slf4jLog(67): jetty-6.1.26 2023-05-24 21:58:21,185 INFO [Listener at localhost.localdomain/44227] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/java.io.tmpdir/Jetty_localhost_42365_datanode____.3dmcyt/webapp 2023-05-24 21:58:21,245 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x937ff1163523be9f: Processing first storage report for DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba from datanode 008a5dc0-b0bd-4c86-8129-3cfc56a2c028 2023-05-24 21:58:21,245 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x937ff1163523be9f: from storage DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba node DatanodeRegistration(127.0.0.1:45775, datanodeUuid=008a5dc0-b0bd-4c86-8129-3cfc56a2c028, infoPort=36099, infoSecurePort=0, ipcPort=44227, storageInfo=lv=-57;cid=testClusterID;nsid=959709338;c=1684965500921), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:58:21,245 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x937ff1163523be9f: Processing first storage report for DS-806b03f4-0fe5-4cec-aa77-f3910e94d867 from datanode 008a5dc0-b0bd-4c86-8129-3cfc56a2c028 2023-05-24 21:58:21,245 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x937ff1163523be9f: from storage DS-806b03f4-0fe5-4cec-aa77-f3910e94d867 node DatanodeRegistration(127.0.0.1:45775, datanodeUuid=008a5dc0-b0bd-4c86-8129-3cfc56a2c028, infoPort=36099, infoSecurePort=0, ipcPort=44227, storageInfo=lv=-57;cid=testClusterID;nsid=959709338;c=1684965500921), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:58:21,265 INFO [Listener at localhost.localdomain/44227] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42365 2023-05-24 21:58:21,272 WARN [Listener at localhost.localdomain/40439] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-05-24 21:58:21,364 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa165f9324504fbc6: Processing first storage report for DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74 from datanode be29db46-f4c8-4d28-938d-a5e192b30500 2023-05-24 21:58:21,364 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa165f9324504fbc6: from storage DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74 node DatanodeRegistration(127.0.0.1:33523, datanodeUuid=be29db46-f4c8-4d28-938d-a5e192b30500, infoPort=40907, infoSecurePort=0, ipcPort=40439, storageInfo=lv=-57;cid=testClusterID;nsid=959709338;c=1684965500921), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:58:21,364 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa165f9324504fbc6: Processing first storage report for DS-9297f29b-a4d4-42bc-bc5d-d6998b84ec58 from datanode be29db46-f4c8-4d28-938d-a5e192b30500 2023-05-24 21:58:21,364 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa165f9324504fbc6: from storage DS-9297f29b-a4d4-42bc-bc5d-d6998b84ec58 node DatanodeRegistration(127.0.0.1:33523, 
datanodeUuid=be29db46-f4c8-4d28-938d-a5e192b30500, infoPort=40907, infoSecurePort=0, ipcPort=40439, storageInfo=lv=-57;cid=testClusterID;nsid=959709338;c=1684965500921), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-05-24 21:58:21,378 DEBUG [Listener at localhost.localdomain/40439] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48 2023-05-24 21:58:21,381 INFO [Listener at localhost.localdomain/40439] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/zookeeper_0, clientPort=60543, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-05-24 21:58:21,382 INFO [Listener at localhost.localdomain/40439] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60543 2023-05-24 21:58:21,382 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,383 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,398 INFO [Listener at localhost.localdomain/40439] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04 with version=8 2023-05-24 21:58:21,399 INFO [Listener at localhost.localdomain/40439] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34243/user/jenkins/test-data/78a739d9-0dcc-6771-3599-32cee6adcdb3/hbase-staging 2023-05-24 21:58:21,400 INFO [Listener at localhost.localdomain/40439] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:58:21,400 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,401 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,401 INFO [Listener at localhost.localdomain/40439] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:58:21,401 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,401 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:58:21,401 INFO [Listener at localhost.localdomain/40439] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:58:21,402 INFO [Listener at localhost.localdomain/40439] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43601 2023-05-24 21:58:21,403 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,403 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,404 INFO [Listener at localhost.localdomain/40439] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43601 connecting to ZooKeeper ensemble=127.0.0.1:60543 2023-05-24 21:58:21,409 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:436010x0, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:58:21,410 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43601-0x1017f7b41f00000 connected 2023-05-24 21:58:21,422 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:58:21,422 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:21,423 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:58:21,424 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43601 2023-05-24 21:58:21,424 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43601 2023-05-24 21:58:21,424 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43601 2023-05-24 21:58:21,425 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43601 2023-05-24 21:58:21,425 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43601 2023-05-24 21:58:21,425 INFO [Listener at localhost.localdomain/40439] master.HMaster(444): 
hbase.rootdir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04, hbase.cluster.distributed=false 2023-05-24 21:58:21,441 INFO [Listener at localhost.localdomain/40439] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-05-24 21:58:21,441 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,442 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,442 INFO [Listener at localhost.localdomain/40439] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-05-24 21:58:21,442 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-05-24 21:58:21,442 INFO [Listener at localhost.localdomain/40439] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-05-24 21:58:21,442 INFO [Listener at localhost.localdomain/40439] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-05-24 21:58:21,443 INFO [Listener at localhost.localdomain/40439] ipc.NettyRpcServer(120): Bind to /148.251.75.209:45319 2023-05-24 21:58:21,444 INFO [Listener at localhost.localdomain/40439] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-05-24 21:58:21,445 DEBUG [Listener at localhost.localdomain/40439] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-05-24 21:58:21,445 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,446 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,447 INFO [Listener at localhost.localdomain/40439] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45319 connecting to ZooKeeper ensemble=127.0.0.1:60543 2023-05-24 21:58:21,450 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:453190x0, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-05-24 21:58:21,451 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): regionserver:453190x0, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:58:21,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45319-0x1017f7b41f00001 connected 2023-05-24 21:58:21,451 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): regionserver:45319-0x1017f7b41f00001, 
quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:21,452 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ZKUtil(164): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-05-24 21:58:21,452 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45319 2023-05-24 21:58:21,452 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45319 2023-05-24 21:58:21,453 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45319 2023-05-24 21:58:21,453 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45319 2023-05-24 21:58:21,453 DEBUG [Listener at localhost.localdomain/40439] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45319 2023-05-24 21:58:21,454 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,455 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:58:21,455 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,456 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:58:21,456 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-05-24 21:58:21,456 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,456 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:58:21,457 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,43601,1684965501400 from backup master directory 2023-05-24 21:58:21,457 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-05-24 21:58:21,457 DEBUG [Listener at localhost.localdomain/40439-EventThread] 
zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,458 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-05-24 21:58:21,458 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-05-24 21:58:21,458 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,468 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/hbase.id with ID: ce9df930-ff7d-483a-9d3b-9a4be64f6ff4 2023-05-24 21:58:21,477 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:21,480 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,488 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x497e2199 to 127.0.0.1:60543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:58:21,492 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4bb71d03, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:58:21,492 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-05-24 21:58:21,493 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-05-24 21:58:21,493 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:58:21,495 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', 
MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store-tmp 2023-05-24 21:58:21,504 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:21,504 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:58:21,504 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:21,504 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:21,504 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:58:21,504 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:21,505 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:21,505 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:58:21,505 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/WALs/jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,508 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43601%2C1684965501400, suffix=, logDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/WALs/jenkins-hbase20.apache.org,43601,1684965501400, archiveDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/oldWALs, maxLogs=10 2023-05-24 21:58:21,514 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/WALs/jenkins-hbase20.apache.org,43601,1684965501400/jenkins-hbase20.apache.org%2C43601%2C1684965501400.1684965501509 2023-05-24 21:58:21,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45775,DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba,DISK], DatanodeInfoWithStorage[127.0.0.1:33523,DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74,DISK]] 2023-05-24 21:58:21,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:58:21,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated 
master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:21,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,515 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,518 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,520 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-05-24 21:58:21,520 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-05-24 21:58:21,521 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,522 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,523 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,527 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-05-24 21:58:21,530 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:58:21,531 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=756474, jitterRate=-0.03809387981891632}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:58:21,531 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:58:21,533 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-05-24 21:58:21,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-05-24 21:58:21,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-05-24 21:58:21,535 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-05-24 21:58:21,536 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-05-24 21:58:21,536 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-05-24 21:58:21,536 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-05-24 21:58:21,538 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-05-24 21:58:21,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-05-24 21:58:21,549 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-05-24 21:58:21,549 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
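The WAL configuration entry above (blocksize=256 MB, rollsize=128 MB, maxLogs=10) shows the roll threshold sitting at exactly half the WAL block size. A minimal sketch of that relationship, assuming the stock hbase.regionserver.logroll.multiplier default of 0.5 and the standard configuration keys; the literals are assumptions taken from the logged values, not something asserted by this test:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalRollSizeSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // 256 MB, as logged by AbstractFSWAL above; the real default derives from the HDFS block size.
    long walBlockSize = conf.getLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    long rollSize = (long) (walBlockSize * multiplier);
    System.out.println("rollsize=" + (rollSize >> 20) + " MB"); // 128 MB, matching the log line
  }
}

Once a WAL grows past that roll size a roll is requested, which is the general behaviour this test class is built around.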
2023-05-24 21:58:21,549 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-05-24 21:58:21,549 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-05-24 21:58:21,550 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-05-24 21:58:21,552 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,552 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-05-24 21:58:21,553 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-05-24 21:58:21,553 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-05-24 21:58:21,554 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:21,554 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:21,554 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,556 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,43601,1684965501400, sessionid=0x1017f7b41f00000, setting cluster-up flag (Was=false) 2023-05-24 21:58:21,558 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,561 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-05-24 21:58:21,561 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,563 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,565 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-05-24 21:58:21,566 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:21,566 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.hbase-snapshot/.tmp 2023-05-24 21:58:21,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-05-24 21:58:21,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:58:21,568 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:58:21,569 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,570 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1684965531570 2023-05-24 21:58:21,570 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-05-24 21:58:21,570 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): 
Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,571 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-05-24 21:58:21,571 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-05-24 21:58:21,571 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-05-24 21:58:21,572 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965501572,5,FailOnTimeoutGroup] 2023-05-24 21:58:21,572 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965501572,5,FailOnTimeoutGroup] 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,572 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
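The cleaner and chore entries above (LogsCleaner and HFileCleaner, period=600000 ms) describe periodic cleanup tasks run from a scheduled pool. This is not HBase's ChoreService implementation, just a plain-JDK sketch of the scheduling pattern those lines describe:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CleanerChoreSketch {
  public static void main(String[] args) {
    ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
    // Placeholder body; the real chores scan oldWALs / archived HFiles and delete expired files.
    Runnable logsCleaner = () -> System.out.println("scan oldWALs and delete expired files");
    // period=600000 ms = 10 minutes, as in the LogsCleaner/HFileCleaner chore lines above.
    pool.scheduleAtFixedRate(logsCleaner, 0, 600_000, TimeUnit.MILLISECONDS);
  }
}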
2023-05-24 21:58:21,572 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:58:21,580 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:58:21,581 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-05-24 21:58:21,581 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04 2023-05-24 21:58:21,587 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:21,588 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:58:21,589 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/info 2023-05-24 21:58:21,589 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:58:21,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,590 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:58:21,591 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:58:21,591 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:58:21,592 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,592 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:58:21,593 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/table 2023-05-24 21:58:21,593 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:58:21,594 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,594 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740 2023-05-24 21:58:21,595 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740 2023-05-24 21:58:21,596 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:58:21,597 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:58:21,598 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:58:21,599 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=854713, jitterRate=0.08682428300380707}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:58:21,599 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:58:21,599 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:58:21,599 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:58:21,599 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:58:21,599 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:58:21,599 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:58:21,599 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:58:21,600 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:58:21,600 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-05-24 21:58:21,600 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-05-24 21:58:21,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:meta, region=1588230740, ASSIGN}] 2023-05-24 21:58:21,602 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-05-24 21:58:21,603 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-05-24 21:58:21,656 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(951): ClusterId : ce9df930-ff7d-483a-9d3b-9a4be64f6ff4 2023-05-24 21:58:21,658 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-05-24 21:58:21,662 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-05-24 21:58:21,663 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-05-24 21:58:21,667 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-05-24 21:58:21,668 DEBUG [RS:0;jenkins-hbase20:45319] zookeeper.ReadOnlyZKClient(139): Connect 0x3ccd6f15 to 127.0.0.1:60543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:58:21,681 DEBUG [RS:0;jenkins-hbase20:45319] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@320a86fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:58:21,681 DEBUG [RS:0;jenkins-hbase20:45319] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36350465, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:58:21,691 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:45319 2023-05-24 21:58:21,691 INFO [RS:0;jenkins-hbase20:45319] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-05-24 21:58:21,691 INFO [RS:0;jenkins-hbase20:45319] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-05-24 21:58:21,691 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1022): About to register with Master. 
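The hbase:meta table descriptor written a few entries above declares three column families (info, rep_barrier, table) plus the MultiRowMutationEndpoint coprocessor. As a hedged illustration of what those attributes mean in client terms, a family with the same settings as 'info' could be declared like this with the 2.x descriptor builders; this is not how the master constructs the meta descriptor, and the table name here is made up:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptorSketch {
  public static void main(String[] args) throws Exception {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE)   // BLOOMFILTER => 'NONE'
        .setInMemory(true)                    // IN_MEMORY => 'true'
        .setMaxVersions(3)                    // VERSIONS => '3'
        .setBlocksize(8192)                   // BLOCKSIZE => '8192'
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "meta_like")) // hypothetical table, for illustration
        .setColumnFamily(info)
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td);
  }
}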
2023-05-24 21:58:21,692 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(2809): reportForDuty to master=jenkins-hbase20.apache.org,43601,1684965501400 with isa=jenkins-hbase20.apache.org/148.251.75.209:45319, startcode=1684965501441 2023-05-24 21:58:21,692 DEBUG [RS:0;jenkins-hbase20:45319] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-05-24 21:58:21,695 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39989, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-05-24 21:58:21,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43601] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,696 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04 2023-05-24 21:58:21,696 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:37293 2023-05-24 21:58:21,696 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=-1 2023-05-24 21:58:21,697 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:58:21,698 DEBUG [RS:0;jenkins-hbase20:45319] zookeeper.ZKUtil(162): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,698 WARN [RS:0;jenkins-hbase20:45319] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
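The /hbase/rs watcher lines above rely on the region server publishing itself as an ephemeral ZooKeeper node, so its entry vanishes if the process dies (hence the HBASE_ZNODE_FILE warning about longer MTTR when the node cannot be cleared proactively by the start scripts). A bare ZooKeeper sketch of an ephemeral registration of that shape; the path and payload are placeholders, not what HBase actually writes:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralRsNodeSketch {
  public static void main(String[] args) throws Exception {
    // Quorum string matches the one in the log; the session timeout is arbitrary here.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:60543", 90_000, event -> { });
    // Ephemeral node: removed automatically when this session expires or closes.
    // Assumes the /hbase/rs parent znodes already exist, as they do once the master is up.
    zk.create("/hbase/rs/example-host,45319,1684965501441", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
  }
}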
2023-05-24 21:58:21,698 INFO [RS:0;jenkins-hbase20:45319] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:58:21,698 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,45319,1684965501441] 2023-05-24 21:58:21,698 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1946): logDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,705 DEBUG [RS:0;jenkins-hbase20:45319] zookeeper.ZKUtil(162): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,706 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-05-24 21:58:21,706 INFO [RS:0;jenkins-hbase20:45319] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-05-24 21:58:21,708 INFO [RS:0;jenkins-hbase20:45319] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-05-24 21:58:21,708 INFO [RS:0;jenkins-hbase20:45319] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-05-24 21:58:21,708 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,708 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-05-24 21:58:21,709 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
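The MemStoreFlusher line above (globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M) is consistent with the default global-memstore fractions. A small arithmetic sketch; the heap size is back-computed from the logged values and is an assumption, as are the default fractions:

public class MemStoreLimitSketch {
  public static void main(String[] args) {
    double heapMb = 1956.0;           // assumed RS heap, roughly 782.4 MB / 0.40
    double globalFraction = 0.40;     // hbase.regionserver.global.memstore.size, assumed default
    double lowerLimitFraction = 0.95; // hbase.regionserver.global.memstore.size.lower.limit, assumed default
    double limit = heapMb * globalFraction;      // ~782.4 MB, the logged limit
    double lowMark = limit * lowerLimitFraction; // ~743.3 MB, the logged low mark
    System.out.printf("limit=%.1f MB, lowMark=%.1f MB%n", limit, lowMark);
  }
}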
2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,710 DEBUG [RS:0;jenkins-hbase20:45319] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-05-24 21:58:21,711 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,711 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,711 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,724 INFO [RS:0;jenkins-hbase20:45319] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-05-24 21:58:21,724 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,45319,1684965501441-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-05-24 21:58:21,733 INFO [RS:0;jenkins-hbase20:45319] regionserver.Replication(203): jenkins-hbase20.apache.org,45319,1684965501441 started 2023-05-24 21:58:21,733 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,45319,1684965501441, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:45319, sessionid=0x1017f7b41f00001 2023-05-24 21:58:21,733 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-05-24 21:58:21,733 DEBUG [RS:0;jenkins-hbase20:45319] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,733 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45319,1684965501441' 2023-05-24 21:58:21,733 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-05-24 21:58:21,733 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,45319,1684965501441' 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-05-24 21:58:21,734 DEBUG [RS:0;jenkins-hbase20:45319] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-05-24 21:58:21,734 INFO [RS:0;jenkins-hbase20:45319] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-05-24 21:58:21,734 INFO [RS:0;jenkins-hbase20:45319] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
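The two "Quota support disabled" lines above reflect hbase.quota.enabled being left at its default of false. A minimal sketch of the switch a cluster or test would flip to start the RPC and space quota managers instead; illustrative only, not part of this run:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EnableQuotasSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With this set, RegionServerRpcQuotaManager / RegionServerSpaceQuotaManager would start.
    conf.setBoolean("hbase.quota.enabled", true);
    System.out.println(conf.getBoolean("hbase.quota.enabled", false));
  }
}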
2023-05-24 21:58:21,753 DEBUG [jenkins-hbase20:43601] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=1, allServersCount=1 2023-05-24 21:58:21,754 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45319,1684965501441, state=OPENING 2023-05-24 21:58:21,755 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-05-24 21:58:21,756 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:21,756 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45319,1684965501441}] 2023-05-24 21:58:21,756 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:58:21,839 INFO [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45319%2C1684965501441, suffix=, logDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441, archiveDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs, maxLogs=32 2023-05-24 21:58:21,855 INFO [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441/jenkins-hbase20.apache.org%2C45319%2C1684965501441.1684965501841 2023-05-24 21:58:21,855 DEBUG [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45775,DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba,DISK], DatanodeInfoWithStorage[127.0.0.1:33523,DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74,DISK]] 2023-05-24 21:58:21,910 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:21,911 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-05-24 21:58:21,913 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39816, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-05-24 21:58:21,918 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-05-24 21:58:21,919 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:58:21,921 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C45319%2C1684965501441.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441, archiveDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs, maxLogs=32 2023-05-24 21:58:21,930 
INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441/jenkins-hbase20.apache.org%2C45319%2C1684965501441.meta.1684965501922.meta 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45775,DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba,DISK], DatanodeInfoWithStorage[127.0.0.1:33523,DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74,DISK]] 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-05-24 21:58:21,931 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:21,931 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-05-24 21:58:21,932 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-05-24 21:58:21,933 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-05-24 21:58:21,934 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/info 2023-05-24 21:58:21,934 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/info 2023-05-24 21:58:21,934 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-05-24 21:58:21,935 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,935 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-05-24 21:58:21,936 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:58:21,936 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/rep_barrier 2023-05-24 21:58:21,937 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-05-24 21:58:21,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,937 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-05-24 21:58:21,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/table 2023-05-24 21:58:21,938 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/table 2023-05-24 21:58:21,939 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-05-24 21:58:21,939 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:21,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740 2023-05-24 21:58:21,941 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740 2023-05-24 21:58:21,944 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-05-24 21:58:21,947 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-05-24 21:58:21,948 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=839686, jitterRate=0.06771713495254517}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-05-24 21:58:21,948 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-05-24 21:58:21,953 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1684965501910 2023-05-24 21:58:21,958 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:meta,,1.1588230740 2023-05-24 21:58:21,959 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-05-24 21:58:21,959 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,45319,1684965501441, state=OPEN 2023-05-24 21:58:21,960 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-05-24 21:58:21,960 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-05-24 21:58:21,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-05-24 21:58:21,962 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,45319,1684965501441 in 204 msec 2023-05-24 
21:58:21,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-05-24 21:58:21,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-05-24 21:58:21,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 398 msec 2023-05-24 21:58:21,966 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1684965501966, completionTime=-1 2023-05-24 21:58:21,966 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=1; waited=0ms, expected min=1 server(s), max=1 server(s), master is running 2023-05-24 21:58:21,966 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-05-24 21:58:21,968 DEBUG [hconnection-0x5824ff61-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:58:21,970 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:58:21,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=1 2023-05-24 21:58:21,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1684965561971 2023-05-24 21:58:21,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1684965621971 2023-05-24 21:58:21,971 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1684965501400-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1684965501400-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1684965501400-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:43601, period=300000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
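Right after joining the cluster, the master notices the namespace table is missing and schedules a CreateTableProcedure for hbase:namespace (pid=4 below). For context, that table backs the client-visible namespace API on this branch; a hedged sketch of ordinary client usage once the cluster is up, with a made-up namespace name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // On branch-2.4, namespace metadata ends up stored in the hbase:namespace table.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      System.out.println(admin.getNamespaceDescriptor("demo_ns"));
    }
  }
}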
2023-05-24 21:58:21,976 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-05-24 21:58:21,977 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-05-24 21:58:21,977 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-05-24 21:58:21,978 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-05-24 21:58:21,979 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-05-24 21:58:21,980 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.tmp/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:21,981 DEBUG [HFileArchiver-11] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.tmp/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1 empty. 2023-05-24 21:58:21,981 DEBUG [HFileArchiver-11] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.tmp/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:21,981 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-05-24 21:58:21,992 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-05-24 21:58:21,993 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 03d043d18f6c946fdc179dc8daedb8c1, NAME => 'hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/.tmp 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 03d043d18f6c946fdc179dc8daedb8c1, disabling compactions & flushes 2023-05-24 21:58:21,998 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. after waiting 0 ms 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:21,998 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:21,998 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 03d043d18f6c946fdc179dc8daedb8c1: 2023-05-24 21:58:22,000 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-05-24 21:58:22,001 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965502001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1684965502001"}]},"ts":"1684965502001"} 2023-05-24 21:58:22,003 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-05-24 21:58:22,004 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-05-24 21:58:22,004 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965502004"}]},"ts":"1684965502004"} 2023-05-24 21:58:22,006 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-05-24 21:58:22,012 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=03d043d18f6c946fdc179dc8daedb8c1, ASSIGN}] 2023-05-24 21:58:22,014 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=03d043d18f6c946fdc179dc8daedb8c1, ASSIGN 2023-05-24 21:58:22,015 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=03d043d18f6c946fdc179dc8daedb8c1, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,45319,1684965501441; forceNewPlan=false, retain=false 2023-05-24 21:58:22,166 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=03d043d18f6c946fdc179dc8daedb8c1, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,166 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965502166"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1684965502166"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1684965502166"}]},"ts":"1684965502166"} 2023-05-24 21:58:22,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 03d043d18f6c946fdc179dc8daedb8c1, server=jenkins-hbase20.apache.org,45319,1684965501441}] 2023-05-24 21:58:22,328 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 03d043d18f6c946fdc179dc8daedb8c1, NAME => 'hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.', STARTKEY => '', ENDKEY => ''} 2023-05-24 21:58:22,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-05-24 21:58:22,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,332 INFO [StoreOpener-03d043d18f6c946fdc179dc8daedb8c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,334 DEBUG [StoreOpener-03d043d18f6c946fdc179dc8daedb8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/info 2023-05-24 21:58:22,334 DEBUG [StoreOpener-03d043d18f6c946fdc179dc8daedb8c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/info 2023-05-24 21:58:22,334 INFO [StoreOpener-03d043d18f6c946fdc179dc8daedb8c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 03d043d18f6c946fdc179dc8daedb8c1 columnFamilyName info 2023-05-24 21:58:22,334 INFO [StoreOpener-03d043d18f6c946fdc179dc8daedb8c1-1] regionserver.HStore(310): Store=03d043d18f6c946fdc179dc8daedb8c1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-05-24 21:58:22,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,338 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-05-24 21:58:22,340 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 03d043d18f6c946fdc179dc8daedb8c1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=16384, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=786585, jitterRate=1.958012580871582E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-05-24 21:58:22,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 03d043d18f6c946fdc179dc8daedb8c1: 2023-05-24 21:58:22,341 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2334): Post open deploy tasks for hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1., pid=6, masterSystemTime=1684965502320 2023-05-24 21:58:22,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2361): Finished post open deploy task for hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,344 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 
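The entries above cover the server-side open of the hbase:namespace region: the HRegion is instantiated, its info store is opened, no recovered edits are found, an initial seqid file is written, and the AssignRegionHandler reports the region as opened. For reference, a test driving this cluster would typically wait for that assignment to finish with something like the minimal sketch below; it assumes the HBaseTestingUtility and TableName APIs already visible in this log, and the class and helper names are ours.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // Minimal sketch (class and helper names ours): block until hbase:namespace is
    // fully assigned, i.e. until the OPEN state written by pid=5/pid=6 above is
    // visible in hbase:meta.
    class NamespaceAssignmentWait {
      static void waitForNamespaceRegion(HBaseTestingUtility util) throws Exception {
        util.waitUntilAllRegionsAssigned(TableName.NAMESPACE_TABLE_NAME);
      }
    }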
2023-05-24 21:58:22,344 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=03d043d18f6c946fdc179dc8daedb8c1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,344 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1684965502344"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1684965502344"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1684965502344"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1684965502344"}]},"ts":"1684965502344"} 2023-05-24 21:58:22,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-05-24 21:58:22,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 03d043d18f6c946fdc179dc8daedb8c1, server=jenkins-hbase20.apache.org,45319,1684965501441 in 177 msec 2023-05-24 21:58:22,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-05-24 21:58:22,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=03d043d18f6c946fdc179dc8daedb8c1, ASSIGN in 338 msec 2023-05-24 21:58:22,350 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-05-24 21:58:22,350 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1684965502350"}]},"ts":"1684965502350"} 2023-05-24 21:58:22,351 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-05-24 21:58:22,353 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-05-24 21:58:22,355 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 377 msec 2023-05-24 21:58:22,378 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-05-24 21:58:22,379 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:58:22,379 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:22,383 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-05-24 21:58:22,397 DEBUG [Listener at localhost.localdomain/40439-EventThread] 
zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:58:22,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 19 msec 2023-05-24 21:58:22,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-05-24 21:58:22,413 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-05-24 21:58:22,418 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-05-24 21:58:22,431 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-05-24 21:58:22,433 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 0.975sec 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1684965501400-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-05-24 21:58:22,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43601,1684965501400-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
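Up to this point the master has walked CreateTableProcedure through its states (CREATE_TABLE_PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE, POST_OPERATION) for hbase:namespace and then created the default and hbase namespaces. The same procedure chain is what a client-side table creation triggers; the sketch below is only an illustration using the HBase 2.x client API, with the table name and class name chosen for the example.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Sketch (table and class names ours): a client-side createTable call drives the
    // same CreateTableProcedure state machine logged above for hbase:namespace.
    class CreateTableSketch {
      static void createExampleTable(Connection conn) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("example"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
              .build());
        }
      }
    }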
2023-05-24 21:58:22,434 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-05-24 21:58:22,456 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ReadOnlyZKClient(139): Connect 0x05a30713 to 127.0.0.1:60543 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-05-24 21:58:22,460 DEBUG [Listener at localhost.localdomain/40439] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7eda0664, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-05-24 21:58:22,461 DEBUG [hconnection-0x398a0c90-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-05-24 21:58:22,463 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-05-24 21:58:22,464 INFO [Listener at localhost.localdomain/40439] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:22,464 INFO [Listener at localhost.localdomain/40439] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-05-24 21:58:22,480 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-05-24 21:58:22,480 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:22,481 INFO [Listener at localhost.localdomain/40439] master.MasterRpcServices(492): Client=null/null set balanceSwitch=false 2023-05-24 21:58:22,482 INFO [Listener at localhost.localdomain/40439] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.FSHLogProvider 2023-05-24 21:58:22,484 INFO [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=test.com%2C8080%2C1, suffix=, logDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1, archiveDir=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs, maxLogs=32 2023-05-24 21:58:22,491 INFO [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1/test.com%2C8080%2C1.1684965502484 2023-05-24 21:58:22,491 DEBUG [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45775,DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba,DISK], DatanodeInfoWithStorage[127.0.0.1:33523,DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74,DISK]] 2023-05-24 21:58:22,499 INFO [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(802): Rolled WAL 
/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1/test.com%2C8080%2C1.1684965502484 with entries=0, filesize=83 B; new WAL /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1/test.com%2C8080%2C1.1684965502491 2023-05-24 21:58:22,502 DEBUG [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(887): Create new FSHLog writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45775,DS-4a7d1ed7-2730-4449-9135-4b5832ba95ba,DISK], DatanodeInfoWithStorage[127.0.0.1:33523,DS-17fc4d26-0ea1-4e42-8c3b-c76c734fda74,DISK]] 2023-05-24 21:58:22,502 DEBUG [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(716): hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1/test.com%2C8080%2C1.1684965502484 is not closed yet, will try archiving it next time 2023-05-24 21:58:22,503 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1 2023-05-24 21:58:22,510 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/test.com,8080,1/test.com%2C8080%2C1.1684965502484 to hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs/test.com%2C8080%2C1.1684965502484 2023-05-24 21:58:22,512 DEBUG [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs 2023-05-24 21:58:22,512 INFO [Listener at localhost.localdomain/40439] wal.AbstractFSWAL(1031): Closed WAL: FSHLog test.com%2C8080%2C1:(num 1684965502491) 2023-05-24 21:58:22,512 INFO [Listener at localhost.localdomain/40439] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-05-24 21:58:22,512 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x05a30713 to 127.0.0.1:60543 2023-05-24 21:58:22,512 DEBUG [Listener at localhost.localdomain/40439] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:22,513 DEBUG [Listener at localhost.localdomain/40439] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-05-24 21:58:22,513 DEBUG [Listener at localhost.localdomain/40439] util.JVMClusterUtil(257): Found active master hash=1272683514, stopped=false 2023-05-24 21:58:22,513 INFO [Listener at localhost.localdomain/40439] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:22,514 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:22,514 INFO [Listener at localhost.localdomain/40439] procedure2.ProcedureExecutor(629): Stopping 2023-05-24 21:58:22,514 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-05-24 21:58:22,515 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
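These entries show the core of the log-rolling exercise: a standalone FSHLog is created through WALFactory with FSHLogProvider, rolled while still empty, and the superseded file (entries=0, filesize=83 B) is archived to oldWALs before the factory is closed. A minimal sketch of driving that same roll through the public WAL API follows; it assumes the HBase 2.x WALFactory/WAL interfaces, and the factory id, table name, and class name are illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionInfoBuilder;
    import org.apache.hadoop.hbase.wal.WAL;
    import org.apache.hadoop.hbase.wal.WALFactory;

    // Sketch (ids and names ours): create a WAL via WALFactory, roll it while empty,
    // and close the factory -- the sequence behind the "New WAL" / "Rolled WAL" /
    // "Archiving ... oldWALs" entries above.
    class EmptyWalRollSketch {
      static void rollEmptyWal(Configuration conf) throws Exception {
        WALFactory factory = new WALFactory(conf, "test.com,8080,1");
        RegionInfo region = RegionInfoBuilder.newBuilder(TableName.valueOf("example")).build();
        WAL wal = factory.getWAL(region);
        wal.rollWriter();   // no entries were appended, so the old file goes to oldWALs
        factory.close();
      }
    }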
2023-05-24 21:58:22,515 DEBUG [Listener at localhost.localdomain/40439] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x497e2199 to 127.0.0.1:60543 2023-05-24 21:58:22,515 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:22,515 DEBUG [Listener at localhost.localdomain/40439] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:22,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-05-24 21:58:22,516 INFO [Listener at localhost.localdomain/40439] regionserver.HRegionServer(2295): ***** STOPPING region server 'jenkins-hbase20.apache.org,45319,1684965501441' ***** 2023-05-24 21:58:22,516 INFO [Listener at localhost.localdomain/40439] regionserver.HRegionServer(2309): STOPPED: Shutdown requested 2023-05-24 21:58:22,516 INFO [RS:0;jenkins-hbase20:45319] regionserver.HeapMemoryManager(220): Stopping 2023-05-24 21:58:22,516 INFO [RS:0;jenkins-hbase20:45319] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-05-24 21:58:22,516 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-05-24 21:58:22,516 INFO [RS:0;jenkins-hbase20:45319] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(3303): Received CLOSE for 03d043d18f6c946fdc179dc8daedb8c1 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,517 DEBUG [RS:0;jenkins-hbase20:45319] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3ccd6f15 to 127.0.0.1:60543 2023-05-24 21:58:22,517 DEBUG [RS:0;jenkins-hbase20:45319] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:22,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 03d043d18f6c946fdc179dc8daedb8c1, disabling compactions & flushes 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-05-24 21:58:22,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-05-24 21:58:22,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-05-24 21:58:22,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 
after waiting 0 ms 2023-05-24 21:58:22,517 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(3303): Received CLOSE for 1588230740 2023-05-24 21:58:22,517 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 03d043d18f6c946fdc179dc8daedb8c1 1/1 column families, dataSize=78 B heapSize=488 B 2023-05-24 21:58:22,518 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-05-24 21:58:22,518 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1478): Online Regions={03d043d18f6c946fdc179dc8daedb8c1=hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1., 1588230740=hbase:meta,,1.1588230740} 2023-05-24 21:58:22,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-05-24 21:58:22,518 DEBUG [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1504): Waiting on 03d043d18f6c946fdc179dc8daedb8c1, 1588230740 2023-05-24 21:58:22,518 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-05-24 21:58:22,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-05-24 21:58:22,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-05-24 21:58:22,518 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-05-24 21:58:22,518 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-05-24 21:58:22,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/.tmp/info/5f24829956e44813b134350f42ba5299 2023-05-24 21:58:22,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/.tmp/info/c9606534850f4770a777053f2a416a22 2023-05-24 21:58:22,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/.tmp/info/5f24829956e44813b134350f42ba5299 as hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/info/5f24829956e44813b134350f42ba5299 2023-05-24 21:58:22,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/info/5f24829956e44813b134350f42ba5299, entries=2, sequenceid=6, filesize=4.8 K 2023-05-24 21:58:22,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 03d043d18f6c946fdc179dc8daedb8c1 in 28ms, sequenceid=6, compaction requested=false 2023-05-24 21:58:22,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-05-24 21:58:22,550 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/.tmp/table/e7d678a9309e4fbd80750e117c6d33e3 2023-05-24 21:58:22,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/namespace/03d043d18f6c946fdc179dc8daedb8c1/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-05-24 21:58:22,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 2023-05-24 21:58:22,552 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 03d043d18f6c946fdc179dc8daedb8c1: 2023-05-24 21:58:22,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1684965501976.03d043d18f6c946fdc179dc8daedb8c1. 
2023-05-24 21:58:22,558 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/.tmp/info/c9606534850f4770a777053f2a416a22 as hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/info/c9606534850f4770a777053f2a416a22 2023-05-24 21:58:22,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/info/c9606534850f4770a777053f2a416a22, entries=10, sequenceid=9, filesize=5.9 K 2023-05-24 21:58:22,565 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/.tmp/table/e7d678a9309e4fbd80750e117c6d33e3 as hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/table/e7d678a9309e4fbd80750e117c6d33e3 2023-05-24 21:58:22,569 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/table/e7d678a9309e4fbd80750e117c6d33e3, entries=2, sequenceid=9, filesize=4.7 K 2023-05-24 21:58:22,570 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 52ms, sequenceid=9, compaction requested=false 2023-05-24 21:58:22,570 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-05-24 21:58:22,577 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-05-24 21:58:22,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-05-24 21:58:22,578 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-05-24 21:58:22,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-05-24 21:58:22,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-05-24 21:58:22,712 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-05-24 21:58:22,712 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-05-24 21:58:22,718 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,45319,1684965501441; all regions closed. 
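During shutdown both the namespace region and hbase:meta flush their memstores to new HFiles before closing: DefaultStoreFlusher writes the file, HStore commits and adds it, and a final recovered.edits/<seqid> marker records the max sequence id. The same flush can also be requested explicitly from a test; the sketch below assumes the standard Admin API, with the class and helper names chosen for the example.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch (names ours): request the memstore flush that region close performs
    // implicitly above before writing the final seqid marker.
    class FlushSketch {
      static void flushTable(Admin admin, TableName table) throws Exception {
        admin.flush(table);
      }
    }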
2023-05-24 21:58:22,719 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,724 DEBUG [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs 2023-05-24 21:58:22,724 INFO [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45319%2C1684965501441.meta:.meta(num 1684965501922) 2023-05-24 21:58:22,725 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/WALs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,730 DEBUG [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/oldWALs 2023-05-24 21:58:22,730 INFO [RS:0;jenkins-hbase20:45319] wal.AbstractFSWAL(1031): Closed WAL: FSHLog jenkins-hbase20.apache.org%2C45319%2C1684965501441:(num 1684965501841) 2023-05-24 21:58:22,730 DEBUG [RS:0;jenkins-hbase20:45319] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:22,730 INFO [RS:0;jenkins-hbase20:45319] regionserver.LeaseManager(133): Closed leases 2023-05-24 21:58:22,730 INFO [RS:0;jenkins-hbase20:45319] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-05-24 21:58:22,731 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:58:22,731 INFO [RS:0;jenkins-hbase20:45319] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:45319 2023-05-24 21:58:22,733 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:58:22,733 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,45319,1684965501441 2023-05-24 21:58:22,733 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-05-24 21:58:22,734 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,45319,1684965501441] 2023-05-24 21:58:22,734 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,45319,1684965501441; numProcessing=1 2023-05-24 21:58:22,734 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,45319,1684965501441 already deleted, retry=false 2023-05-24 21:58:22,734 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,45319,1684965501441 expired; onlineServers=0 2023-05-24 21:58:22,734 INFO [RegionServerTracker-0] regionserver.HRegionServer(2295): ***** STOPPING region 
server 'jenkins-hbase20.apache.org,43601,1684965501400' ***** 2023-05-24 21:58:22,734 INFO [RegionServerTracker-0] regionserver.HRegionServer(2309): STOPPED: Cluster shutdown set; onlineServer=0 2023-05-24 21:58:22,735 DEBUG [M:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fe40659, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-05-24 21:58:22,735 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:22,735 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43601,1684965501400; all regions closed. 2023-05-24 21:58:22,735 DEBUG [M:0;jenkins-hbase20:43601] ipc.AbstractRpcClient(494): Stopping rpc client 2023-05-24 21:58:22,735 DEBUG [M:0;jenkins-hbase20:43601] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-05-24 21:58:22,735 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-05-24 21:58:22,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965501572] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1684965501572,5,FailOnTimeoutGroup] 2023-05-24 21:58:22,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965501572] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1684965501572,5,FailOnTimeoutGroup] 2023-05-24 21:58:22,735 DEBUG [M:0;jenkins-hbase20:43601] cleaner.HFileCleaner(317): Stopping file delete threads 2023-05-24 21:58:22,736 INFO [M:0;jenkins-hbase20:43601] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-05-24 21:58:22,736 INFO [M:0;jenkins-hbase20:43601] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-05-24 21:58:22,736 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-05-24 21:58:22,736 INFO [M:0;jenkins-hbase20:43601] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-05-24 21:58:22,736 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-05-24 21:58:22,737 DEBUG [M:0;jenkins-hbase20:43601] master.HMaster(1512): Stopping service threads 2023-05-24 21:58:22,737 INFO [M:0;jenkins-hbase20:43601] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-05-24 21:58:22,737 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-05-24 21:58:22,737 ERROR [M:0;jenkins-hbase20:43601] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-11,5,PEWorkerGroup] 2023-05-24 21:58:22,737 INFO [M:0;jenkins-hbase20:43601] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-05-24 21:58:22,737 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-05-24 21:58:22,737 DEBUG [M:0;jenkins-hbase20:43601] zookeeper.ZKUtil(398): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-05-24 21:58:22,737 WARN [M:0;jenkins-hbase20:43601] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-05-24 21:58:22,737 INFO [M:0;jenkins-hbase20:43601] assignment.AssignmentManager(315): Stopping assignment manager 2023-05-24 21:58:22,738 INFO [M:0;jenkins-hbase20:43601] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-05-24 21:58:22,738 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-05-24 21:58:22,738 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:22,738 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:22,738 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-05-24 21:58:22,738 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-05-24 21:58:22,738 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-05-24 21:58:22,747 INFO [M:0;jenkins-hbase20:43601] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a7ff4eeaf0a34fb8b58085cedc80bdbd 2023-05-24 21:58:22,752 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a7ff4eeaf0a34fb8b58085cedc80bdbd as hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a7ff4eeaf0a34fb8b58085cedc80bdbd 2023-05-24 21:58:22,757 INFO [M:0;jenkins-hbase20:43601] regionserver.HStore(1080): Added hdfs://localhost.localdomain:37293/user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a7ff4eeaf0a34fb8b58085cedc80bdbd, entries=8, sequenceid=66, filesize=6.3 K 2023-05-24 21:58:22,759 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegion(2948): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=66, compaction requested=false 2023-05-24 21:58:22,761 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-05-24 21:58:22,761 DEBUG [M:0;jenkins-hbase20:43601] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-05-24 21:58:22,762 DEBUG [WAL-Shutdown-0] wal.FSHLog(489): Closing WAL writer in /user/jenkins/test-data/c60af815-1db4-d562-506e-c9147f232b04/MasterData/WALs/jenkins-hbase20.apache.org,43601,1684965501400 2023-05-24 21:58:22,765 INFO [M:0;jenkins-hbase20:43601] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-05-24 21:58:22,765 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-05-24 21:58:22,766 INFO [M:0;jenkins-hbase20:43601] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43601 2023-05-24 21:58:22,769 DEBUG [M:0;jenkins-hbase20:43601] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,43601,1684965501400 already deleted, retry=false 2023-05-24 21:58:22,917 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:22,917 INFO [M:0;jenkins-hbase20:43601] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43601,1684965501400; zookeeper connection closed. 
2023-05-24 21:58:22,917 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): master:43601-0x1017f7b41f00000, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:23,018 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:23,018 INFO [RS:0;jenkins-hbase20:45319] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,45319,1684965501441; zookeeper connection closed. 2023-05-24 21:58:23,018 DEBUG [Listener at localhost.localdomain/40439-EventThread] zookeeper.ZKWatcher(600): regionserver:45319-0x1017f7b41f00001, quorum=127.0.0.1:60543, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-05-24 21:58:23,018 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@299ff829] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@299ff829 2023-05-24 21:58:23,019 INFO [Listener at localhost.localdomain/40439] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 1 regionserver(s) complete 2023-05-24 21:58:23,019 WARN [Listener at localhost.localdomain/40439] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:58:23,022 INFO [Listener at localhost.localdomain/40439] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:58:23,129 WARN [BP-2004606876-148.251.75.209-1684965500921 heartbeating to localhost.localdomain/127.0.0.1:37293] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:58:23,129 WARN [BP-2004606876-148.251.75.209-1684965500921 heartbeating to localhost.localdomain/127.0.0.1:37293] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2004606876-148.251.75.209-1684965500921 (Datanode Uuid be29db46-f4c8-4d28-938d-a5e192b30500) service to localhost.localdomain/127.0.0.1:37293 2023-05-24 21:58:23,130 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/dfs/data/data3/current/BP-2004606876-148.251.75.209-1684965500921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:23,130 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/dfs/data/data4/current/BP-2004606876-148.251.75.209-1684965500921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:23,131 WARN [Listener at localhost.localdomain/40439] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-05-24 21:58:23,135 INFO [Listener at localhost.localdomain/40439] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-05-24 21:58:23,238 WARN [BP-2004606876-148.251.75.209-1684965500921 heartbeating to localhost.localdomain/127.0.0.1:37293] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-05-24 21:58:23,238 WARN 
[BP-2004606876-148.251.75.209-1684965500921 heartbeating to localhost.localdomain/127.0.0.1:37293] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2004606876-148.251.75.209-1684965500921 (Datanode Uuid 008a5dc0-b0bd-4c86-8129-3cfc56a2c028) service to localhost.localdomain/127.0.0.1:37293 2023-05-24 21:58:23,239 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/dfs/data/data1/current/BP-2004606876-148.251.75.209-1684965500921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:23,240 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/7ee55b74-6345-3ee2-2c56-7aaabaf9ca48/cluster_51e0f9e2-d0dd-c5df-d59e-cdadff227943/dfs/data/data2/current/BP-2004606876-148.251.75.209-1684965500921] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-05-24 21:58:23,256 INFO [Listener at localhost.localdomain/40439] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-05-24 21:58:23,369 INFO [Listener at localhost.localdomain/40439] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-05-24 21:58:23,381 INFO [Listener at localhost.localdomain/40439] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-05-24 21:58:23,392 INFO [Listener at localhost.localdomain/40439] hbase.ResourceChecker(175): after: regionserver.wal.TestLogRolling#testLogRollOnNothingWritten Thread=129 (was 105) - Thread LEAK? -, OpenFileDescriptor=553 (was 532) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=69 (was 75), ProcessCount=168 (was 168), AvailableMemoryMB=8736 (was 8743)
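The closing ResourceChecker entry reports thread, file-descriptor, load, and memory counts against the values recorded at the start of the test (the "was" figures), flagging a possible thread and file-descriptor leak for this run. The run as a whole follows the usual minicluster lifecycle; the sketch below assumes the HBaseTestingUtility API, and the class name and test body are placeholders.

    import org.apache.hadoop.hbase.HBaseTestingUtility;

    // Sketch (class name and test body ours): the minicluster lifecycle that brackets
    // this log -- start DFS/ZK/master/region server, run the test body, then shut
    // everything down so the closing resource counts can be compared.
    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();
        try {
          // ... test body, e.g. create a WAL and roll it ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }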