2023-02-08 03:03:13,996 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac 2023-02-08 03:03:14,009 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2 timeout: 13 mins 2023-02-08 03:03:14,044 INFO [Time-limited test] hbase.ResourceChecker(147): before: client.TestAsyncClusterAdminApi2#testStop Thread=8, OpenFileDescriptor=260, MaxFileDescriptor=60000, SystemLoadAverage=390, ProcessCount=171, AvailableMemoryMB=3571 2023-02-08 03:03:14,051 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-02-08 03:03:14,051 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22, deleteOnExit=true 2023-02-08 03:03:14,051 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-02-08 03:03:14,052 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/test.cache.data in system properties and HBase conf 2023-02-08 03:03:14,052 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/hadoop.tmp.dir in system properties and HBase conf 2023-02-08 03:03:14,053 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/hadoop.log.dir in system properties and HBase conf 2023-02-08 03:03:14,053 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/mapreduce.cluster.local.dir in system properties and HBase conf 2023-02-08 03:03:14,054 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-02-08 03:03:14,054 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-02-08 03:03:14,167 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-02-08 03:03:14,526 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-02-08 03:03:14,530 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-02-08 03:03:14,530 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-02-08 03:03:14,530 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-02-08 03:03:14,531 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-08 03:03:14,531 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-02-08 03:03:14,531 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-02-08 03:03:14,531 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-08 03:03:14,532 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-08 03:03:14,532 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-02-08 03:03:14,532 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/nfs.dump.dir in system properties and HBase conf 2023-02-08 03:03:14,533 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/java.io.tmpdir in system properties and HBase conf 2023-02-08 03:03:14,533 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-08 03:03:14,533 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-02-08 03:03:14,533 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-02-08 03:03:14,949 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-08 03:03:14,953 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-08 03:03:15,731 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-02-08 03:03:15,860 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-02-08 03:03:15,873 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:15,906 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:15,932 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/java.io.tmpdir/Jetty_localhost_localdomain_36971_hdfs____ru1e96/webapp 2023-02-08 03:03:16,061 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36971 2023-02-08 03:03:16,069 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-08 03:03:16,069 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-08 03:03:16,657 WARN [Listener at localhost.localdomain/41189] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:16,737 WARN [Listener at localhost.localdomain/41189] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-08 03:03:16,752 WARN [Listener at localhost.localdomain/41189] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:16,759 INFO [Listener at localhost.localdomain/41189] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:16,763 INFO [Listener at 
localhost.localdomain/41189] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/java.io.tmpdir/Jetty_localhost_37509_datanode____cn7qbl/webapp 2023-02-08 03:03:16,852 INFO [Listener at localhost.localdomain/41189] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37509 2023-02-08 03:03:17,161 WARN [Listener at localhost.localdomain/35765] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:17,169 WARN [Listener at localhost.localdomain/35765] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-08 03:03:17,172 WARN [Listener at localhost.localdomain/35765] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:17,174 INFO [Listener at localhost.localdomain/35765] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:17,180 INFO [Listener at localhost.localdomain/35765] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/java.io.tmpdir/Jetty_localhost_43899_datanode____.2omsyo/webapp 2023-02-08 03:03:17,267 INFO [Listener at localhost.localdomain/35765] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43899 2023-02-08 03:03:17,279 WARN [Listener at localhost.localdomain/38279] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:17,292 WARN [Listener at localhost.localdomain/38279] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-08 03:03:17,296 WARN [Listener at localhost.localdomain/38279] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:17,298 INFO [Listener at localhost.localdomain/38279] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:17,305 INFO [Listener at localhost.localdomain/38279] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/java.io.tmpdir/Jetty_localhost_38973_datanode____3hi5vb/webapp 2023-02-08 03:03:17,386 INFO [Listener at localhost.localdomain/38279] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38973 2023-02-08 03:03:17,394 WARN [Listener at localhost.localdomain/42545] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:18,826 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x57f9b76db5b60961: Processing first storage report for DS-0aa7a44f-20d5-4a1d-9862-125b547173d2 from datanode b3a0df47-9464-4fb4-a2ea-e973fa7789df 2023-02-08 03:03:18,827 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x57f9b76db5b60961: from storage DS-0aa7a44f-20d5-4a1d-9862-125b547173d2 node DatanodeRegistration(127.0.0.1:38117, datanodeUuid=b3a0df47-9464-4fb4-a2ea-e973fa7789df, infoPort=37309, infoSecurePort=0, ipcPort=35765, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,827 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2125c4b8b68ef659: Processing first storage report for DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b from datanode a97f1446-8120-4937-8ae2-b8820660b183 2023-02-08 03:03:18,827 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2125c4b8b68ef659: from storage DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b node DatanodeRegistration(127.0.0.1:44597, datanodeUuid=a97f1446-8120-4937-8ae2-b8820660b183, infoPort=42355, infoSecurePort=0, ipcPort=38279, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,827 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6e9277c322798399: Processing first storage report for DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53 from datanode b128cda4-c4d3-4954-88b2-0e89dc22df49 2023-02-08 03:03:18,828 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6e9277c322798399: from storage DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53 node DatanodeRegistration(127.0.0.1:45673, datanodeUuid=b128cda4-c4d3-4954-88b2-0e89dc22df49, infoPort=38875, infoSecurePort=0, ipcPort=42545, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,828 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x57f9b76db5b60961: Processing first storage report for DS-5ffef363-14a4-41f2-8782-938dd083843c from datanode b3a0df47-9464-4fb4-a2ea-e973fa7789df 2023-02-08 03:03:18,828 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x57f9b76db5b60961: from storage DS-5ffef363-14a4-41f2-8782-938dd083843c node DatanodeRegistration(127.0.0.1:38117, datanodeUuid=b3a0df47-9464-4fb4-a2ea-e973fa7789df, infoPort=37309, infoSecurePort=0, ipcPort=35765, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,828 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2125c4b8b68ef659: Processing first storage report for DS-d2e7fa10-29f2-4818-9135-19a1d8d82738 from datanode a97f1446-8120-4937-8ae2-b8820660b183 2023-02-08 03:03:18,828 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2125c4b8b68ef659: from storage DS-d2e7fa10-29f2-4818-9135-19a1d8d82738 node DatanodeRegistration(127.0.0.1:44597, datanodeUuid=a97f1446-8120-4937-8ae2-b8820660b183, infoPort=42355, infoSecurePort=0, ipcPort=38279, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,829 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6e9277c322798399: Processing first storage report for 
DS-0fb87974-2f11-49a2-a41c-6486dff42d0e from datanode b128cda4-c4d3-4954-88b2-0e89dc22df49 2023-02-08 03:03:18,829 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6e9277c322798399: from storage DS-0fb87974-2f11-49a2-a41c-6486dff42d0e node DatanodeRegistration(127.0.0.1:45673, datanodeUuid=b128cda4-c4d3-4954-88b2-0e89dc22df49, infoPort=38875, infoSecurePort=0, ipcPort=42545, storageInfo=lv=-57;cid=testClusterID;nsid=1762256676;c=1675825395009), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-02-08 03:03:18,848 DEBUG [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac 2023-02-08 03:03:18,900 INFO [Listener at localhost.localdomain/42545] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/zookeeper_0, clientPort=65121, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-02-08 03:03:18,915 INFO [Listener at localhost.localdomain/42545] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=65121 2023-02-08 03:03:18,921 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:18,923 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:19,579 INFO [Listener at localhost.localdomain/42545] util.FSUtils(479): Created version file at hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916 with version=8 2023-02-08 03:03:19,580 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/hbase-staging 2023-02-08 03:03:19,859 INFO [Listener at localhost.localdomain/42545] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-02-08 03:03:20,213 INFO [Listener at localhost.localdomain/42545] client.ConnectionUtils(127): master/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:20,237 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,238 INFO [Listener at localhost.localdomain/42545] 
ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,238 INFO [Listener at localhost.localdomain/42545] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:20,238 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,238 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:20,350 INFO [Listener at localhost.localdomain/42545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:20,409 DEBUG [Listener at localhost.localdomain/42545] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-02-08 03:03:20,481 INFO [Listener at localhost.localdomain/42545] ipc.NettyRpcServer(120): Bind to /136.243.104.168:41409 2023-02-08 03:03:20,489 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,491 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,508 INFO [Listener at localhost.localdomain/42545] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41409 connecting to ZooKeeper ensemble=127.0.0.1:65121 2023-02-08 03:03:20,654 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:414090x0, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:20,659 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): master:41409-0x10140860fd10000 connected 2023-02-08 03:03:20,768 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:20,771 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:20,779 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:20,785 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41409 2023-02-08 03:03:20,786 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41409 2023-02-08 
03:03:20,786 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41409 2023-02-08 03:03:20,786 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41409 2023-02-08 03:03:20,787 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41409 2023-02-08 03:03:20,792 INFO [Listener at localhost.localdomain/42545] master.HMaster(439): hbase.rootdir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916, hbase.cluster.distributed=false 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,850 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:20,854 INFO [Listener at localhost.localdomain/42545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:20,857 INFO [Listener at localhost.localdomain/42545] ipc.NettyRpcServer(120): Bind to /136.243.104.168:44017 2023-02-08 03:03:20,859 INFO [Listener at localhost.localdomain/42545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:20,864 DEBUG [Listener at localhost.localdomain/42545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:20,866 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,867 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,869 INFO [Listener at localhost.localdomain/42545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44017 connecting to ZooKeeper ensemble=127.0.0.1:65121 2023-02-08 03:03:20,880 DEBUG [Listener at localhost.localdomain/42545-EventThread] 
zookeeper.ZKWatcher(600): regionserver:440170x0, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:20,882 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:44017-0x10140860fd10001 connected 2023-02-08 03:03:20,882 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:20,884 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:20,885 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:20,885 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44017 2023-02-08 03:03:20,886 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44017 2023-02-08 03:03:20,887 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44017 2023-02-08 03:03:20,888 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44017 2023-02-08 03:03:20,888 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44017 2023-02-08 03:03:20,901 INFO [Listener at localhost.localdomain/42545] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:20,902 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,902 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,902 INFO [Listener at localhost.localdomain/42545] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:20,903 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,903 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:20,903 INFO [Listener at localhost.localdomain/42545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:20,905 INFO [Listener at localhost.localdomain/42545] ipc.NettyRpcServer(120): 
Bind to /136.243.104.168:45163 2023-02-08 03:03:20,905 INFO [Listener at localhost.localdomain/42545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:20,906 DEBUG [Listener at localhost.localdomain/42545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:20,907 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,909 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,910 INFO [Listener at localhost.localdomain/42545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45163 connecting to ZooKeeper ensemble=127.0.0.1:65121 2023-02-08 03:03:20,923 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:451630x0, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:20,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:45163-0x10140860fd10002 connected 2023-02-08 03:03:20,924 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:20,925 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:20,926 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:20,926 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45163 2023-02-08 03:03:20,926 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45163 2023-02-08 03:03:20,927 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45163 2023-02-08 03:03:20,927 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45163 2023-02-08 03:03:20,928 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45163 2023-02-08 03:03:20,941 INFO [Listener at localhost.localdomain/42545] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:20,941 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,941 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,942 INFO [Listener at localhost.localdomain/42545] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:20,942 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:20,942 INFO [Listener at localhost.localdomain/42545] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:20,942 INFO [Listener at localhost.localdomain/42545] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:20,944 INFO [Listener at localhost.localdomain/42545] ipc.NettyRpcServer(120): Bind to /136.243.104.168:40931 2023-02-08 03:03:20,944 INFO [Listener at localhost.localdomain/42545] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:20,945 DEBUG [Listener at localhost.localdomain/42545] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:20,946 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,948 INFO [Listener at localhost.localdomain/42545] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:20,950 INFO [Listener at localhost.localdomain/42545] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40931 connecting to ZooKeeper ensemble=127.0.0.1:65121 2023-02-08 03:03:20,965 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:409310x0, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:20,966 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:40931-0x10140860fd10003 connected 2023-02-08 03:03:20,966 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:20,967 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:20,968 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ZKUtil(164): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:20,969 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40931 2023-02-08 03:03:20,969 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40931 2023-02-08 03:03:20,970 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40931 2023-02-08 03:03:20,970 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40931 2023-02-08 03:03:20,971 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40931 2023-02-08 03:03:20,973 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2158): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:20,992 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-08 03:03:20,994 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:21,023 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:21,023 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:21,023 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:21,023 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:21,024 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:21,025 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-08 03:03:21,027 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(224): Deleting ZNode for /hbase/backup-masters/jenkins-hbase12.apache.org,41409,1675825399690 from backup master directory 2023-02-08 03:03:21,027 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-08 03:03:21,038 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:21,039 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-08 03:03:21,039 WARN [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:21,040 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(234): Registered as active master=jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:21,043 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-02-08 03:03:21,045 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-02-08 03:03:21,138 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] util.FSUtils(628): Created cluster ID file at hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/hbase.id with ID: b6af344e-768a-4b34-a13d-f3d66ef414c7 2023-02-08 03:03:21,180 INFO [master/jenkins-hbase12:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:21,206 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:21,253 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7362284b to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:21,298 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@388ebb0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:21,318 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:21,320 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-02-08 03:03:21,335 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-02-08 03:03:21,335 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create 
wrong number of arguments, should be hadoop 2.x
2023-02-08 03:03:21,337 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:849)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2178)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:523)
    at java.lang.Thread.run(Thread.java:750)
2023-02-08 03:03:21,341 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:849)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2178)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:523)
    at java.lang.Thread.run(Thread.java:750)
2023-02-08 03:03:21,342 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider
of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:21,369 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7689): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store-tmp 2023-02-08 03:03:21,404 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:21,405 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-08 03:03:21,405 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:21,405 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:21,405 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-08 03:03:21,406 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:21,406 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
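The 'master:store' descriptor printed in the entries above (a single column family 'proc' with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', BLOCKCACHE => 'true') is built internally during master region setup (see the MasterRegion/MasterRegionFactory entries above); the same attributes map onto the public HBase 2.x descriptor-builder API. The following is only a minimal illustrative sketch of that mapping, assuming the standard TableDescriptorBuilder/ColumnFamilyDescriptorBuilder classes; the class name MasterStoreDescriptorSketch is invented for the example and is not part of HBase.

// Illustrative sketch only: decodes the descriptor string logged above into builder calls.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("master:store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .setInMemory(false)                // IN_MEMORY => 'false'
            .setBlockCacheEnabled(true)        // BLOCKCACHE => 'true'
            .build())
        .build();
  }
}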
2023-02-08 03:03:21,406 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: Waiting for close lock at 1675825401405Disabling compacts and flushes for region at 1675825401405Disabling writes for close at 1675825401406 (+1 ms)Writing region close event to WAL at 1675825401406Closed at 1675825401406 2023-02-08 03:03:21,408 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/WALs/jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:21,429 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C41409%2C1675825399690, suffix=, logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/WALs/jenkins-hbase12.apache.org,41409,1675825399690, archiveDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/oldWALs, maxLogs=10 2023-02-08 03:03:21,491 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK] 2023-02-08 03:03:21,491 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK] 2023-02-08 03:03:21,491 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK] 2023-02-08 03:03:21,498 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-02-08 03:03:21,554 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/WALs/jenkins-hbase12.apache.org,41409,1675825399690/jenkins-hbase12.apache.org%2C41409%2C1675825399690.1675825401437
2023-02-08 03:03:21,554 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK], DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK], DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK]]
2023-02-08 03:03:21,555 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7850): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-02-08 03:03:21,555 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-02-08 03:03:21,558 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7890): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-02-08 03:03:21,559 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7893): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-02-08 03:03:21,609 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-02-08 03:03:21,615 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-02-08 03:03:21,636 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-02-08 03:03:21,647 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50,
encoding=NONE, compression=NONE 2023-02-08 03:03:21,653 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:21,655 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:21,670 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1054): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:21,675 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:21,676 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1071): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=60523367, jitterRate=-0.09813155233860016}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-08 03:03:21,676 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(964): Region open journal for 1595e783b53d99cd5eef43b6debb2682: Writing region info on filesystem at 1675825401582Initializing all the Stores at 1675825401585 (+3 ms)Instantiating store for column family {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825401586 (+1 ms)Cleaning up temporary data from old regions at 1675825401660 (+74 ms)Cleaning up detritus from prior splits at 1675825401661 (+1 ms)Region opened successfully at 1675825401676 (+15 ms) 2023-02-08 03:03:21,677 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-02-08 03:03:21,693 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-02-08 03:03:21,693 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-02-08 03:03:21,695 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
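The DEBUG-level ClassNotFoundException recorded above is expected rather than a failure: ProtobufDecoder probes for Hadoop's relocated ("shaded") protobuf classes and, when they are missing (Hadoop 3.2 and below, as the preceding log line notes), falls back to the unshaded protobuf API. The following minimal Java sketch illustrates that probe pattern; the class name is taken from the log, while the class and method structure around it are illustrative assumptions, not HBase's actual ProtobufDecoder code.

// Illustrative probe, not HBase's ProtobufDecoder: checks whether Hadoop ships the
// shaded protobuf classes and falls back when the lookup throws
// ClassNotFoundException, which is what the DEBUG stack trace above records.
public final class ShadedProtobufProbe {
  // Class name taken verbatim from the log line above.
  private static final String SHADED_MESSAGE_LITE =
      "org.apache.hadoop.thirdparty.protobuf.MessageLite";

  public static boolean shadedProtobufAvailable() {
    try {
      Class.forName(SHADED_MESSAGE_LITE);
      return true;   // Hadoop 3.3+ bundles relocated protobuf.
    } catch (ClassNotFoundException e) {
      return false;  // Hadoop 3.2 and below: use unshaded protobuf instead.
    }
  }

  public static void main(String[] args) {
    System.out.println("shaded protobuf available: " + shadedProtobufAvailable());
  }
}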
2023-02-08 03:03:21,697 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-02-08 03:03:21,722 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 24 msec 2023-02-08 03:03:21,722 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-02-08 03:03:21,745 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-02-08 03:03:21,750 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-02-08 03:03:21,771 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-02-08 03:03:21,774 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-02-08 03:03:21,775 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-02-08 03:03:21,779 INFO [master/jenkins-hbase12:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-02-08 03:03:21,783 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-02-08 03:03:21,849 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:21,850 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-02-08 03:03:21,851 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-02-08 03:03:21,862 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-02-08 03:03:21,876 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:21,876 DEBUG 
[Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:21,876 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:21,876 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:21,876 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:21,877 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(739): Active/primary master=jenkins-hbase12.apache.org,41409,1675825399690, sessionid=0x10140860fd10000, setting cluster-up flag (Was=false) 2023-02-08 03:03:21,912 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:21,943 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-02-08 03:03:21,945 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:21,971 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:22,007 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-02-08 03:03:22,011 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:22,014 WARN [master/jenkins-hbase12:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.hbase-snapshot/.tmp 2023-02-08 03:03:22,075 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(952): ClusterId : b6af344e-768a-4b34-a13d-f3d66ef414c7 2023-02-08 03:03:22,075 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(952): ClusterId : b6af344e-768a-4b34-a13d-f3d66ef414c7 2023-02-08 03:03:22,075 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(952): ClusterId : b6af344e-768a-4b34-a13d-f3d66ef414c7 2023-02-08 03:03:22,080 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:22,080 DEBUG 
[RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:22,080 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:22,103 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-08 03:03:22,103 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-08 03:03:22,103 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-08 03:03:22,103 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:22,103 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:22,103 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:22,107 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:22,115 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:22,115 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase12:0, corePoolSize=10, maxPoolSize=10 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:22,116 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,117 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1675825432117 2023-02-08 03:03:22,119 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-02-08 03:03:22,122 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, 
state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:22,123 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-02-08 03:03:22,123 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:22,123 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:22,123 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:22,125 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ReadOnlyZKClient(139): Connect 0x5c203099 to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:22,125 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ReadOnlyZKClient(139): Connect 0x4168517d to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:22,125 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ReadOnlyZKClient(139): Connect 0x37c9bba2 to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:22,165 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-02-08 03:03:22,167 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:22,174 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-02-08 03:03:22,174 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-02-08 03:03:22,175 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-02-08 03:03:22,175 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-02-08 03:03:22,175 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
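The hbase:meta table descriptor dumped above (column families info, rep_barrier and table, each IN_MEMORY => 'true' with BLOOMFILTER => 'NONE' and the block sizes shown) can be expressed with the public HBase 2.x client API. The sketch below builds an equivalent descriptor for a hypothetical table name; it is an illustration under those assumptions, not the FSTableDescriptors code path the master uses when it writes .tableinfo.0000000001.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class MetaLikeDescriptorSketch {
  // Builds one column family with the attributes logged above.
  private static ColumnFamilyDescriptor family(String name, int versions, int blocksize) {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes(name))
        .setInMemory(true)
        .setMaxVersions(versions)
        .setBloomFilterType(BloomType.NONE)
        .setBlocksize(blocksize)
        .build();
  }

  public static TableDescriptor build() {
    // The table name is hypothetical; hbase:meta itself is created by the master,
    // not by client code. Versions and block sizes mirror the logged descriptor.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demoMetaLike"))
        .setColumnFamily(family("info", 3, 8192))
        .setColumnFamily(family("rep_barrier", Integer.MAX_VALUE, 65536))
        .setColumnFamily(family("table", 3, 8192))
        .build();
  }
}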
2023-02-08 03:03:22,176 DEBUG [RS:1;jenkins-hbase12:45163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fafb720, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:22,179 DEBUG [RS:1;jenkins-hbase12:45163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b903a0d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:22,179 DEBUG [RS:2;jenkins-hbase12:40931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@390e432b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:22,179 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-02-08 03:03:22,179 DEBUG [RS:2;jenkins-hbase12:40931] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d31d0d2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:22,180 DEBUG [RS:0;jenkins-hbase12:44017] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52e7c56d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:22,180 DEBUG [RS:0;jenkins-hbase12:44017] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@56d85be9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:22,181 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-02-08 03:03:22,181 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-02-08 03:03:22,184 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-02-08 03:03:22,184 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-02-08 03:03:22,187 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825402185,5,FailOnTimeoutGroup] 2023-02-08 03:03:22,187 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825402187,5,FailOnTimeoutGroup] 2023-02-08 03:03:22,187 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): 
Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,188 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1451): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-02-08 03:03:22,189 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,189 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,212 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:22,213 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:22,214 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase12:44017 2023-02-08 03:03:22,214 INFO [PEWorker-1] regionserver.HRegion(7671): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916 2023-02-08 03:03:22,214 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase12:45163 2023-02-08 03:03:22,215 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase12:40931 2023-02-08 03:03:22,219 INFO [RS:0;jenkins-hbase12:44017] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:22,220 INFO [RS:0;jenkins-hbase12:44017] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:22,219 INFO [RS:2;jenkins-hbase12:40931] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:22,219 INFO [RS:1;jenkins-hbase12:45163] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:22,220 INFO 
[RS:1;jenkins-hbase12:45163] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:22,220 INFO [RS:2;jenkins-hbase12:40931] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:22,220 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:22,220 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:22,220 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:22,222 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,41409,1675825399690 with isa=jenkins-hbase12.apache.org/136.243.104.168:40931, startcode=1675825400940 2023-02-08 03:03:22,222 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,41409,1675825399690 with isa=jenkins-hbase12.apache.org/136.243.104.168:45163, startcode=1675825400901 2023-02-08 03:03:22,222 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,41409,1675825399690 with isa=jenkins-hbase12.apache.org/136.243.104.168:44017, startcode=1675825400849 2023-02-08 03:03:22,236 DEBUG [PEWorker-1] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:22,239 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-08 03:03:22,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/info 2023-02-08 03:03:22,245 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-08 03:03:22,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-08 03:03:22,249 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:22,249 DEBUG [RS:1;jenkins-hbase12:45163] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:22,249 DEBUG [RS:2;jenkins-hbase12:40931] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:22,249 DEBUG [RS:0;jenkins-hbase12:44017] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:22,250 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-08 03:03:22,251 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-08 03:03:22,254 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/table 2023-02-08 03:03:22,255 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-08 03:03:22,255 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,257 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740 2023-02-08 03:03:22,258 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered 
edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740 2023-02-08 03:03:22,261 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-02-08 03:03:22,263 DEBUG [PEWorker-1] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-08 03:03:22,268 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:22,269 INFO [PEWorker-1] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=68449034, jitterRate=0.019970089197158813}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-08 03:03:22,269 DEBUG [PEWorker-1] regionserver.HRegion(964): Region open journal for 1588230740: Writing region info on filesystem at 1675825402237Initializing all the Stores at 1675825402238 (+1 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825402238Instantiating store for column family {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825402238Instantiating store for column family {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825402238Cleaning up temporary data from old regions at 1675825402260 (+22 ms)Cleaning up detritus from prior splits at 1675825402260Region opened successfully at 1675825402269 (+9 ms) 2023-02-08 03:03:22,270 DEBUG [PEWorker-1] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-08 03:03:22,270 INFO [PEWorker-1] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-08 03:03:22,270 DEBUG [PEWorker-1] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-08 03:03:22,270 DEBUG [PEWorker-1] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-08 03:03:22,270 DEBUG [PEWorker-1] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-08 03:03:22,272 INFO [PEWorker-1] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:22,272 DEBUG [PEWorker-1] regionserver.HRegion(1557): Region close journal for 1588230740: Waiting for close lock at 1675825402270Disabling compacts and flushes for region at 1675825402270Disabling writes for close at 1675825402270Writing region close event to WAL at 1675825402272 (+2 ms)Closed at 1675825402272 2023-02-08 03:03:22,285 
DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:22,285 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-02-08 03:03:22,294 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-02-08 03:03:22,301 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:60451, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-02-08 03:03:22,301 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:59075, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-02-08 03:03:22,301 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:58119, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-02-08 03:03:22,307 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-02-08 03:03:22,310 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-02-08 03:03:22,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41409] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,315 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41409] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,316 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41409] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,331 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916 2023-02-08 03:03:22,331 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916 2023-02-08 03:03:22,331 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916 2023-02-08 03:03:22,331 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41189 2023-02-08 03:03:22,331 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41189 2023-02-08 03:03:22,332 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-08 03:03:22,331 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1596): Config from master: 
hbase.master.info.port=-1 2023-02-08 03:03:22,331 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:41189 2023-02-08 03:03:22,332 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-08 03:03:22,374 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:22,392 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ZKUtil(162): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,392 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ZKUtil(162): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,392 WARN [RS:0;jenkins-hbase12:44017] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:22,392 WARN [RS:1;jenkins-hbase12:45163] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:22,393 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,393 INFO [RS:1;jenkins-hbase12:45163] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:22,394 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,45163,1675825400901] 2023-02-08 03:03:22,393 WARN [RS:2;jenkins-hbase12:40931] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
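The watcher and ephemeral-node traffic above is how the master tracks live region servers: each region server registers an ephemeral znode under /hbase/rs, and RegionServerTracker reacts to the resulting NodeChildrenChanged events. As a minimal read-only illustration, the sketch below lists those children with the plain Apache ZooKeeper client; the quorum address and base znode are taken from the log, and the snippet acts as an external observer rather than the ZKUtil/ZKWatcher code the servers themselves use.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public final class ListRegionServerZNodes {
  public static void main(String[] args) throws Exception {
    // Quorum and session timeout as reported in the log above.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:65121", 90000, event -> { });
    try {
      // Each live region server holds an ephemeral znode here, e.g.
      // jenkins-hbase12.apache.org,44017,1675825400849
      List<String> servers = zk.getChildren("/hbase/rs", false);
      servers.forEach(System.out::println);
    } finally {
      zk.close();
    }
  }
}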
2023-02-08 03:03:22,393 INFO [RS:0;jenkins-hbase12:44017] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:22,394 INFO [RS:2;jenkins-hbase12:40931] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:22,394 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,394 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,394 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,394 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,40931,1675825400940] 2023-02-08 03:03:22,394 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,44017,1675825400849] 2023-02-08 03:03:22,410 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ZKUtil(162): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,410 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ZKUtil(162): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,410 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,411 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ZKUtil(162): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,411 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ZKUtil(162): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,412 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,412 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ZKUtil(162): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,412 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ZKUtil(162): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,412 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ZKUtil(162): regionserver:40931-0x10140860fd10003, 
quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,422 DEBUG [RS:2;jenkins-hbase12:40931] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:22,422 DEBUG [RS:1;jenkins-hbase12:45163] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:22,422 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:22,431 INFO [RS:1;jenkins-hbase12:45163] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:22,431 INFO [RS:0;jenkins-hbase12:44017] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:22,431 INFO [RS:2;jenkins-hbase12:40931] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:22,453 INFO [RS:0;jenkins-hbase12:44017] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:22,453 INFO [RS:1;jenkins-hbase12:45163] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:22,453 INFO [RS:2;jenkins-hbase12:40931] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:22,456 INFO [RS:0;jenkins-hbase12:44017] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:22,456 INFO [RS:2;jenkins-hbase12:40931] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:22,457 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,456 INFO [RS:1;jenkins-hbase12:45163] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:22,457 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,458 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
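The MemStoreFlusher numbers above are consistent with HBase's default sizing: the global memstore limit is a fraction of the JVM heap (hbase.regionserver.global.memstore.size), and the low-water mark defaults to 95% of that limit, which matches 743.3 M ≈ 0.95 × 782.4 M. The small check below just re-derives the low mark from the logged limit; the 0.95 factor is the documented default and is an assumption here, since the test configuration could override it.

public final class MemStoreLimitCheck {
  public static void main(String[] args) {
    // Value reported by MemStoreFlusher in the log above, in megabytes.
    double globalLimitMb = 782.4;
    // Default hbase.regionserver.global.memstore.size.lower.limit (assumed here).
    double lowerLimitFraction = 0.95;
    double lowMarkMb = globalLimitMb * lowerLimitFraction;
    // Prints 743.28, matching the logged globalMemStoreLimitLowMark=743.3 M.
    System.out.printf("low mark = %.2f MB%n", lowMarkMb);
  }
}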
2023-02-08 03:03:22,458 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:22,458 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:22,458 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:22,462 DEBUG [jenkins-hbase12:41409] assignment.AssignmentManager(2178): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-02-08 03:03:22,466 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,466 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,466 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,466 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:22,468 DEBUG 
[RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:0;jenkins-hbase12:44017] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,467 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,469 DEBUG [RS:2;jenkins-hbase12:40931] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,468 DEBUG [jenkins-hbase12:41409] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-08 03:03:22,469 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,470 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,470 DEBUG [RS:1;jenkins-hbase12:45163] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:22,471 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,471 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,471 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,471 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,471 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,476 DEBUG [jenkins-hbase12:41409] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-08 03:03:22,476 DEBUG [jenkins-hbase12:41409] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-08 03:03:22,476 DEBUG [jenkins-hbase12:41409] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-08 03:03:22,476 DEBUG [jenkins-hbase12:41409] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-08 03:03:22,479 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,479 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,479 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,480 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,44017,1675825400849, state=OPENING 2023-02-08 03:03:22,489 INFO [RS:0;jenkins-hbase12:44017] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:22,491 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,44017,1675825400849-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,492 INFO [RS:2;jenkins-hbase12:40931] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:22,493 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,40931,1675825400940-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:22,494 INFO [RS:1;jenkins-hbase12:45163] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:22,494 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,45163,1675825400901-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:22,497 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-02-08 03:03:22,507 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:22,509 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-08 03:03:22,512 INFO [RS:2;jenkins-hbase12:40931] regionserver.Replication(203): jenkins-hbase12.apache.org,40931,1675825400940 started 2023-02-08 03:03:22,512 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,40931,1675825400940, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:40931, sessionid=0x10140860fd10003 2023-02-08 03:03:22,513 INFO [RS:0;jenkins-hbase12:44017] regionserver.Replication(203): jenkins-hbase12.apache.org,44017,1675825400849 started 2023-02-08 03:03:22,513 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,44017,1675825400849, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:44017, sessionid=0x10140860fd10001 2023-02-08 03:03:22,513 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:22,513 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:22,513 DEBUG [RS:2;jenkins-hbase12:40931] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,513 INFO [RS:1;jenkins-hbase12:45163] regionserver.Replication(203): jenkins-hbase12.apache.org,45163,1675825400901 started 2023-02-08 03:03:22,513 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,44017,1675825400849}] 2023-02-08 03:03:22,515 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,45163,1675825400901, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:45163, sessionid=0x10140860fd10002 2023-02-08 03:03:22,515 DEBUG [RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,40931,1675825400940' 2023-02-08 03:03:22,515 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:22,515 DEBUG [RS:1;jenkins-hbase12:45163] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,513 DEBUG [RS:0;jenkins-hbase12:44017] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,515 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,44017,1675825400849' 2023-02-08 03:03:22,515 DEBUG [RS:1;jenkins-hbase12:45163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,45163,1675825400901' 2023-02-08 03:03:22,515 DEBUG [RS:1;jenkins-hbase12:45163] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:22,515 DEBUG [RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:22,515 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:22,516 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:22,516 DEBUG [RS:1;jenkins-hbase12:45163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:22,516 DEBUG [RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:22,517 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:22,517 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:22,517 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:22,517 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:22,517 DEBUG [RS:0;jenkins-hbase12:44017] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,517 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:22,517 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:22,517 DEBUG [RS:2;jenkins-hbase12:40931] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:22,517 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,44017,1675825400849' 2023-02-08 03:03:22,517 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:22,517 DEBUG [RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,40931,1675825400940' 2023-02-08 03:03:22,518 DEBUG [RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:22,517 DEBUG [RS:1;jenkins-hbase12:45163] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:22,518 DEBUG [RS:1;jenkins-hbase12:45163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,45163,1675825400901' 2023-02-08 03:03:22,518 DEBUG [RS:1;jenkins-hbase12:45163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:22,518 DEBUG [RS:0;jenkins-hbase12:44017] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:22,518 DEBUG 
[RS:2;jenkins-hbase12:40931] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:22,518 DEBUG [RS:1;jenkins-hbase12:45163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:22,518 DEBUG [RS:0;jenkins-hbase12:44017] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:22,519 INFO [RS:0;jenkins-hbase12:44017] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:22,519 INFO [RS:0;jenkins-hbase12:44017] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-08 03:03:22,519 DEBUG [RS:1;jenkins-hbase12:45163] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:22,519 DEBUG [RS:2;jenkins-hbase12:40931] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:22,519 INFO [RS:1;jenkins-hbase12:45163] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:22,519 INFO [RS:1;jenkins-hbase12:45163] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-08 03:03:22,519 INFO [RS:2;jenkins-hbase12:40931] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:22,519 INFO [RS:2;jenkins-hbase12:40931] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-08 03:03:22,633 INFO [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C44017%2C1675825400849, suffix=, logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,44017,1675825400849, archiveDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs, maxLogs=32 2023-02-08 03:03:22,633 INFO [RS:2;jenkins-hbase12:40931] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C40931%2C1675825400940, suffix=, logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,40931,1675825400940, archiveDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs, maxLogs=32 2023-02-08 03:03:22,633 INFO [RS:1;jenkins-hbase12:45163] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C45163%2C1675825400901, suffix=, logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,45163,1675825400901, archiveDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs, maxLogs=32 2023-02-08 03:03:22,652 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK] 2023-02-08 03:03:22,652 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK] 2023-02-08 03:03:22,652 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK] 2023-02-08 03:03:22,665 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK] 2023-02-08 03:03:22,665 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK] 2023-02-08 03:03:22,665 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK] 2023-02-08 03:03:22,667 INFO [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,44017,1675825400849/jenkins-hbase12.apache.org%2C44017%2C1675825400849.1675825402636 2023-02-08 03:03:22,669 DEBUG [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK], DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK], DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK]] 2023-02-08 03:03:22,670 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK] 2023-02-08 03:03:22,670 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK] 2023-02-08 03:03:22,670 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK] 2023-02-08 03:03:22,677 INFO [RS:1;jenkins-hbase12:45163] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,45163,1675825400901/jenkins-hbase12.apache.org%2C45163%2C1675825400901.1675825402636 2023-02-08 03:03:22,677 DEBUG [RS:1;jenkins-hbase12:45163] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK], DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK], DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK]] 2023-02-08 03:03:22,679 INFO [RS:2;jenkins-hbase12:40931] wal.AbstractFSWAL(758): 
New WAL /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,40931,1675825400940/jenkins-hbase12.apache.org%2C40931%2C1675825400940.1675825402636 2023-02-08 03:03:22,680 DEBUG [RS:2;jenkins-hbase12:40931] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK], DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK], DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK]] 2023-02-08 03:03:22,703 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:22,705 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-08 03:03:22,708 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43434, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-08 03:03:22,724 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:meta,,1.1588230740 2023-02-08 03:03:22,724 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:22,728 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C44017%2C1675825400849.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,44017,1675825400849, archiveDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs, maxLogs=32 2023-02-08 03:03:22,747 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK] 2023-02-08 03:03:22,747 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK] 2023-02-08 03:03:22,748 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK] 2023-02-08 03:03:22,755 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/WALs/jenkins-hbase12.apache.org,44017,1675825400849/jenkins-hbase12.apache.org%2C44017%2C1675825400849.meta.1675825402729.meta 2023-02-08 03:03:22,755 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45673,DS-fdc91634-afb3-4a73-8bc9-dcacb169bd53,DISK], DatanodeInfoWithStorage[127.0.0.1:38117,DS-0aa7a44f-20d5-4a1d-9862-125b547173d2,DISK], DatanodeInfoWithStorage[127.0.0.1:44597,DS-a67cbfa3-d25e-4edf-b3f0-b542ba75a55b,DISK]] 2023-02-08 03:03:22,755 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-02-08 03:03:22,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-02-08 03:03:22,772 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(8546): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-02-08 03:03:22,776 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-02-08 03:03:22,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-02-08 03:03:22,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:22,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 1588230740 2023-02-08 03:03:22,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 1588230740 2023-02-08 03:03:22,782 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-08 03:03:22,784 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/info 2023-02-08 03:03:22,784 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/info 2023-02-08 03:03:22,784 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-08 03:03:22,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-08 03:03:22,787 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:22,787 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:22,787 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-08 03:03:22,788 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,788 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-08 03:03:22,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/table 2023-02-08 03:03:22,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/table 2023-02-08 03:03:22,790 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-08 03:03:22,790 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:22,792 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): 
Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740 2023-02-08 03:03:22,795 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740 2023-02-08 03:03:22,799 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-02-08 03:03:22,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-08 03:03:22,804 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=64910564, jitterRate=-0.0327572226524353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-08 03:03:22,805 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 1588230740: Running coprocessor pre-open hook at 1675825402780Writing region info on filesystem at 1675825402780Initializing all the Stores at 1675825402782 (+2 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825402782Instantiating store for column family {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825402782Instantiating store for column family {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825402782Cleaning up temporary data from old regions at 1675825402796 (+14 ms)Cleaning up detritus from prior splits at 1675825402797 (+1 ms)Running coprocessor post-open hooks at 1675825402804 (+7 ms)Region opened successfully at 1675825402805 (+1 ms) 2023-02-08 03:03:22,812 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1675825402695 2023-02-08 03:03:22,828 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:meta,,1.1588230740 2023-02-08 03:03:22,828 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:meta,,1.1588230740 2023-02-08 03:03:22,829 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,44017,1675825400849, state=OPEN 2023-02-08 03:03:22,880 DEBUG [Listener at localhost.localdomain/42545-EventThread] 
zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-02-08 03:03:22,881 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-08 03:03:22,890 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-02-08 03:03:22,891 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,44017,1675825400849 in 369 msec 2023-02-08 03:03:22,896 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-02-08 03:03:22,896 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 598 msec 2023-02-08 03:03:22,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 849 msec 2023-02-08 03:03:22,902 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(946): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1675825401010, completionTime=-1 2023-02-08 03:03:22,902 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-02-08 03:03:22,903 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1519): Joining cluster... 2023-02-08 03:03:22,961 DEBUG [hconnection-0x59b82f61-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-08 03:03:22,964 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43448, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-08 03:03:22,981 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1531): Number of RegionServers=3 2023-02-08 03:03:22,981 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1675825462981 2023-02-08 03:03:22,981 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1675825522981 2023-02-08 03:03:22,981 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1538): Joined the cluster in 78 msec 2023-02-08 03:03:23,020 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41409,1675825399690-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:23,020 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41409,1675825399690-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:23,020 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41409,1675825399690-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:23,023 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase12:41409, period=300000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:23,024 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:23,033 DEBUG [master/jenkins-hbase12:0.Chore.1] janitor.CatalogJanitor(175): 2023-02-08 03:03:23,040 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-02-08 03:03:23,041 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2138): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:23,049 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-02-08 03:03:23,052 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-02-08 03:03:23,056 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-02-08 03:03:23,077 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.tmp/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,080 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.tmp/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee empty. 
2023-02-08 03:03:23,081 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.tmp/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,081 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-02-08 03:03:23,124 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:23,127 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7671): creating {ENCODED => 809a9d3a09a6df42a8f670f8902e0fee, NAME => 'hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/.tmp 2023-02-08 03:03:23,146 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:23,146 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1603): Closing 809a9d3a09a6df42a8f670f8902e0fee, disabling compactions & flushes 2023-02-08 03:03:23,146 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,146 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,146 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. after waiting 0 ms 2023-02-08 03:03:23,146 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,147 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1837): Closed hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 
2023-02-08 03:03:23,147 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1557): Region close journal for 809a9d3a09a6df42a8f670f8902e0fee: Waiting for close lock at 1675825403146Disabling compacts and flushes for region at 1675825403146Disabling writes for close at 1675825403146Writing region close event to WAL at 1675825403146Closed at 1675825403146 2023-02-08 03:03:23,151 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-02-08 03:03:23,168 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825403155"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1675825403155"}]},"ts":"1675825403155"} 2023-02-08 03:03:23,190 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-02-08 03:03:23,192 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-02-08 03:03:23,196 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1675825403192"}]},"ts":"1675825403192"} 2023-02-08 03:03:23,200 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-02-08 03:03:23,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-08 03:03:23,225 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-08 03:03:23,225 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-08 03:03:23,226 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-08 03:03:23,226 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-08 03:03:23,230 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=809a9d3a09a6df42a8f670f8902e0fee, ASSIGN}] 2023-02-08 03:03:23,235 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=809a9d3a09a6df42a8f670f8902e0fee, ASSIGN 2023-02-08 03:03:23,236 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=809a9d3a09a6df42a8f670f8902e0fee, ASSIGN; state=OFFLINE, location=jenkins-hbase12.apache.org,44017,1675825400849; forceNewPlan=false, retain=false 2023-02-08 03:03:23,389 INFO [jenkins-hbase12:41409] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-02-08 03:03:23,390 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=809a9d3a09a6df42a8f670f8902e0fee, regionState=OPENING, regionLocation=jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:23,391 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825403390"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1675825403390"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1675825403390"}]},"ts":"1675825403390"} 2023-02-08 03:03:23,399 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 809a9d3a09a6df42a8f670f8902e0fee, server=jenkins-hbase12.apache.org,44017,1675825400849}] 2023-02-08 03:03:23,564 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 809a9d3a09a6df42a8f670f8902e0fee, NAME => 'hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.', STARTKEY => '', ENDKEY => ''} 2023-02-08 03:03:23,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:23,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,568 INFO [StoreOpener-809a9d3a09a6df42a8f670f8902e0fee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,570 DEBUG [StoreOpener-809a9d3a09a6df42a8f670f8902e0fee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/info 2023-02-08 03:03:23,570 DEBUG [StoreOpener-809a9d3a09a6df42a8f670f8902e0fee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/info 2023-02-08 03:03:23,571 INFO [StoreOpener-809a9d3a09a6df42a8f670f8902e0fee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 809a9d3a09a6df42a8f670f8902e0fee columnFamilyName info 2023-02-08 03:03:23,572 INFO [StoreOpener-809a9d3a09a6df42a8f670f8902e0fee-1] regionserver.HStore(310): Store=809a9d3a09a6df42a8f670f8902e0fee/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:23,575 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,576 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,581 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:23,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:23,586 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 809a9d3a09a6df42a8f670f8902e0fee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=59572561, jitterRate=-0.11229966580867767}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-08 03:03:23,586 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 809a9d3a09a6df42a8f670f8902e0fee: Running coprocessor pre-open hook at 1675825403566Writing region info on filesystem at 1675825403566Initializing all the Stores at 1675825403567 (+1 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825403567Cleaning up temporary data from old regions at 1675825403577 (+10 ms)Cleaning up detritus from prior splits at 1675825403578 (+1 ms)Running coprocessor post-open hooks at 1675825403586 (+8 ms)Region opened successfully at 1675825403586 2023-02-08 03:03:23,588 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee., pid=6, masterSystemTime=1675825403553 2023-02-08 03:03:23,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for 
hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,591 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:23,592 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=809a9d3a09a6df42a8f670f8902e0fee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:23,593 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825403592"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1675825403592"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1675825403592"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1675825403592"}]},"ts":"1675825403592"} 2023-02-08 03:03:23,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-02-08 03:03:23,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 809a9d3a09a6df42a8f670f8902e0fee, server=jenkins-hbase12.apache.org,44017,1675825400849 in 198 msec 2023-02-08 03:03:23,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-02-08 03:03:23,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=809a9d3a09a6df42a8f670f8902e0fee, ASSIGN in 370 msec 2023-02-08 03:03:23,604 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-02-08 03:03:23,605 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1675825403604"}]},"ts":"1675825403604"} 2023-02-08 03:03:23,607 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-02-08 03:03:23,672 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-02-08 03:03:23,673 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-02-08 03:03:23,679 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 631 msec 2023-02-08 03:03:23,680 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:23,680 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:23,708 DEBUG 
[master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-02-08 03:03:23,733 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:23,748 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 47 msec 2023-02-08 03:03:23,751 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-02-08 03:03:23,770 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:23,786 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 33 msec 2023-02-08 03:03:23,817 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-02-08 03:03:23,838 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-02-08 03:03:23,839 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1077): Master has completed initialization 2.799sec 2023-02-08 03:03:23,845 INFO [master/jenkins-hbase12:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-02-08 03:03:23,848 INFO [master/jenkins-hbase12:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-02-08 03:03:23,848 INFO [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-02-08 03:03:23,850 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41409,1675825399690-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-02-08 03:03:23,851 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41409,1675825399690-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-02-08 03:03:23,859 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1166): Balancer post startup initialization complete, took 0 seconds 2023-02-08 03:03:23,882 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ReadOnlyZKClient(139): Connect 0x58ba8dd9 to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:23,898 DEBUG [Listener at localhost.localdomain/42545] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48fb1d48, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:23,937 DEBUG [hconnection-0x5b7210b1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-08 03:03:23,948 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43452, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-08 03:03:23,957 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:23,958 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ReadOnlyZKClient(139): Connect 0x41711cd8 to 127.0.0.1:65121 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:23,977 DEBUG [ReadOnlyZKClient-127.0.0.1:65121@0x41711cd8] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4069fb6d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:24,011 DEBUG [Listener at localhost.localdomain/42545] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-08 03:03:24,014 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43460, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-08 03:03:24,015 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44017] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,44017,1675825400849' ***** 2023-02-08 03:03:24,015 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44017] regionserver.HRegionServer(2310): STOPPED: Called by admin client org.apache.hadoop.hbase.client.AsyncConnectionImpl@59e28b9a 2023-02-08 03:03:24,015 INFO [RS:0;jenkins-hbase12:44017] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:24,016 INFO [RS:0;jenkins-hbase12:44017] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-08 03:03:24,016 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:24,017 INFO [RS:0;jenkins-hbase12:44017] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
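The "***** STOPPING region server ... *****" entry above is issued when the test's async admin client asks the cluster to stop (the log attributes it to org.apache.hadoop.hbase.client.AsyncConnectionImpl). A minimal sketch of how a client can drive that path with the HBase 2.x async API; the method names stopRegionServer/stopMaster and the ConnectionFactory.createAsyncConnection call are assumptions about the public AsyncAdmin interface, not taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.AsyncAdmin;
    import org.apache.hadoop.hbase.client.AsyncConnection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class StopClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // createAsyncConnection returns a CompletableFuture<AsyncConnection>; block for the sketch.
        try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
          AsyncAdmin admin = conn.getAdmin();
          // Stop one region server by ServerName (host, port, startcode), then the active master,
          // which is what produces the STOPPING banners seen in this log.
          ServerName rs = ServerName.valueOf("jenkins-hbase12.apache.org", 44017, 1675825400849L);
          admin.stopRegionServer(rs).join();
          admin.stopMaster().join();
        }
      }
    }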
2023-02-08 03:03:24,018 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(3304): Received CLOSE for 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:24,019 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:24,019 DEBUG [RS:0;jenkins-hbase12:44017] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c203099 to 127.0.0.1:65121 2023-02-08 03:03:24,020 DEBUG [RS:0;jenkins-hbase12:44017] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 809a9d3a09a6df42a8f670f8902e0fee, disabling compactions & flushes 2023-02-08 03:03:24,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:24,020 INFO [RS:0;jenkins-hbase12:44017] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-08 03:03:24,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:24,020 INFO [RS:0;jenkins-hbase12:44017] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:24,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. after waiting 0 ms 2023-02-08 03:03:24,020 INFO [RS:0;jenkins-hbase12:44017] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:24,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 
2023-02-08 03:03:24,020 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(3304): Received CLOSE for 1588230740 2023-02-08 03:03:24,020 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1475): Waiting on 2 regions to close 2023-02-08 03:03:24,022 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1479): Online Regions={809a9d3a09a6df42a8f670f8902e0fee=hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee., 1588230740=hbase:meta,,1.1588230740} 2023-02-08 03:03:24,022 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-08 03:03:24,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-08 03:03:24,022 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-08 03:03:24,022 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-08 03:03:24,022 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-08 03:03:24,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 809a9d3a09a6df42a8f670f8902e0fee 1/1 column families, dataSize=78 B heapSize=488 B 2023-02-08 03:03:24,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-02-08 03:03:24,023 DEBUG [Listener at localhost.localdomain/42545] client.ConnectionUtils(586): Start fetching master stub from registry 2023-02-08 03:03:24,023 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1505): Waiting on 1588230740, 809a9d3a09a6df42a8f670f8902e0fee 2023-02-08 03:03:24,028 DEBUG [ReadOnlyZKClient-127.0.0.1:65121@0x41711cd8] client.AsyncConnectionImpl(289): The fetched master address is jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:24,031 DEBUG [ReadOnlyZKClient-127.0.0.1:65121@0x41711cd8] client.ConnectionUtils(594): The fetched master stub is org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$Stub@32ca0cc2 2023-02-08 03:03:24,036 DEBUG [ReadOnlyZKClient-127.0.0.1:65121@0x41711cd8] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-02-08 03:03:24,040 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:45698, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-02-08 03:03:24,040 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41409] master.MasterRpcServices(1601): Client=jenkins//136.243.104.168 stop 2023-02-08 03:03:24,040 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41409] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,41409,1675825399690' ***** 2023-02-08 03:03:24,040 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41409] regionserver.HRegionServer(2310): STOPPED: Stopped by RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41409 2023-02-08 03:03:24,041 DEBUG [M:0;jenkins-hbase12:41409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31187c18, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:24,041 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,41409,1675825399690 2023-02-08 03:03:24,042 DEBUG [M:0;jenkins-hbase12:41409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7362284b to 127.0.0.1:65121 2023-02-08 03:03:24,042 DEBUG [M:0;jenkins-hbase12:41409] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,043 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,41409,1675825399690; all regions closed. 2023-02-08 03:03:24,043 DEBUG [M:0;jenkins-hbase12:41409] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,043 DEBUG [M:0;jenkins-hbase12:41409] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-02-08 03:03:24,043 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-02-08 03:03:24,043 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825402187] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825402187,5,FailOnTimeoutGroup] 2023-02-08 03:03:24,044 DEBUG [M:0;jenkins-hbase12:41409] cleaner.HFileCleaner(317): Stopping file delete threads 2023-02-08 03:03:24,044 INFO [M:0;jenkins-hbase12:41409] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-02-08 03:03:24,044 INFO [M:0;jenkins-hbase12:41409] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-02-08 03:03:24,044 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825402185] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825402185,5,FailOnTimeoutGroup] 2023-02-08 03:03:24,045 INFO [M:0;jenkins-hbase12:41409] hbase.ChoreService(369): Chore service for: master/jenkins-hbase12:0 had [] on shutdown 2023-02-08 03:03:24,045 DEBUG [M:0;jenkins-hbase12:41409] master.HMaster(1502): Stopping service threads 2023-02-08 03:03:24,045 INFO [M:0;jenkins-hbase12:41409] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-02-08 03:03:24,045 INFO [M:0;jenkins-hbase12:41409] procedure2.ProcedureExecutor(629): Stopping 2023-02-08 03:03:24,047 ERROR [M:0;jenkins-hbase12:41409] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] 2023-02-08 03:03:24,047 INFO [M:0;jenkins-hbase12:41409] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-02-08 03:03:24,048 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
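The ERROR above ("ThreadGroup ... PEWorkerGroup ... contains running threads") is the ProcedureExecutor noticing that its worker thread group still holds a live HFileArchiver-1 thread while it is stopping. When chasing this kind of leftover-thread report, a plain-JDK dump of the live threads and their groups is usually enough to see what did not exit; this is only a generic sketch, not how ProcedureExecutor itself reports it:

    // Walk up to the root ThreadGroup and print every live thread with its group name,
    // which makes "group still contains running threads" messages easy to cross-check.
    public class ThreadGroupDump {
      public static void main(String[] args) {
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) {
          root = root.getParent();
        }
        Thread[] threads = new Thread[root.activeCount() * 2];
        int n = root.enumerate(threads, true); // recurse into child groups such as PEWorkerGroup
        for (int i = 0; i < n; i++) {
          ThreadGroup g = threads[i].getThreadGroup();
          System.out.printf("%s [group=%s, daemon=%s]%n",
              threads[i].getName(), g == null ? "terminated" : g.getName(), threads[i].isDaemon());
        }
      }
    }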
2023-02-08 03:03:24,059 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:24,059 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:24,059 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:24,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:24,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:24,059 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:24,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:24,059 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:24,059 DEBUG [M:0;jenkins-hbase12:41409] zookeeper.ZKUtil(398): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-02-08 03:03:24,060 WARN [M:0;jenkins-hbase12:41409] master.ActiveMasterManager(323): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-02-08 03:03:24,060 INFO [M:0;jenkins-hbase12:41409] assignment.AssignmentManager(315): Stopping assignment manager 2023-02-08 03:03:24,061 INFO [M:0;jenkins-hbase12:41409] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-02-08 03:03:24,061 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:24,061 INFO [Listener at localhost.localdomain/42545] client.AsyncConnectionImpl(207): Connection has been closed by Listener at localhost.localdomain/42545. 2023-02-08 03:03:24,062 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-08 03:03:24,062 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-02-08 03:03:24,062 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:24,062 DEBUG [Listener at localhost.localdomain/42545] client.AsyncConnectionImpl(232): Call stack: at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.client.AsyncConnectionImpl.close(AsyncConnectionImpl.java:209) at org.apache.hbase.thirdparty.com.google.common.io.Closeables.close(Closeables.java:79) at org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2.tearDown(TestAsyncClusterAdminApi2.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-02-08 03:03:24,062 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-08 03:03:24,062 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
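The DEBUG "Call stack" entry above comes from AsyncConnectionImpl logging who closed it: the test's tearDown hands the shared connection to the shaded Guava Closeables.close. A minimal sketch of that teardown shape, assuming a JUnit 4 test with a static connection field (field and class names here are illustrative, not copied from the real test):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.AsyncConnection;
    import org.apache.hbase.thirdparty.com.google.common.io.Closeables;
    import org.junit.AfterClass;

    public class AsyncAdminTestSketch {
      private static AsyncConnection ASYNC_CONN; // assumed to be opened in a @BeforeClass

      @AfterClass
      public static void tearDown() throws IOException {
        // Closeables.close(closeable, swallowIOException): passing true means an
        // IOException thrown by close() is logged and swallowed instead of failing teardown.
        Closeables.close(ASYNC_CONN, true);
      }
    }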
2023-02-08 03:03:24,062 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegion(2744): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-02-08 03:03:24,069 DEBUG [Listener at localhost.localdomain/42545] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,070 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x41711cd8 to 127.0.0.1:65121 2023-02-08 03:03:24,071 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-02-08 03:03:24,071 DEBUG [Listener at localhost.localdomain/42545] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58ba8dd9 to 127.0.0.1:65121 2023-02-08 03:03:24,071 DEBUG [Listener at localhost.localdomain/42545] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,072 DEBUG [Listener at localhost.localdomain/42545] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-02-08 03:03:24,072 INFO [Listener at localhost.localdomain/42545] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,45163,1675825400901' ***** 2023-02-08 03:03:24,072 INFO [Listener at localhost.localdomain/42545] regionserver.HRegionServer(2310): STOPPED: Shutdown requested 2023-02-08 03:03:24,072 INFO [Listener at localhost.localdomain/42545] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,40931,1675825400940' ***** 2023-02-08 03:03:24,072 INFO [Listener at localhost.localdomain/42545] regionserver.HRegionServer(2310): STOPPED: Shutdown requested 2023-02-08 03:03:24,072 INFO [RS:1;jenkins-hbase12:45163] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:24,072 INFO [RS:2;jenkins-hbase12:40931] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:24,072 INFO [RS:1;jenkins-hbase12:45163] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-08 03:03:24,072 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:24,072 INFO [RS:1;jenkins-hbase12:45163] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-08 03:03:24,072 INFO [RS:2;jenkins-hbase12:40931] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-08 03:03:24,072 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:24,073 INFO [RS:2;jenkins-hbase12:40931] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
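The "Shutting down minicluster" entry above is HBaseTestingUtility tearing down the whole in-process cluster (one master, three region servers, the mini DFS and ZooKeeper). A minimal sketch of the usual start/stop pairing around such a test; the StartMiniClusterOption builder call is an assumption about the 2.x testing API rather than a quote from this test:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // One master and three region servers, matching the servers stopped in this log.
        TEST_UTIL.startMiniCluster(
            StartMiniClusterOption.builder().numRegionServers(3).numDataNodes(3).build());
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Stops HBase, then DFS and ZooKeeper, and cleans up the test data directory.
        TEST_UTIL.shutdownMiniCluster();
      }
    }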
2023-02-08 03:03:24,073 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:24,073 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:24,074 DEBUG [RS:1;jenkins-hbase12:45163] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4168517d to 127.0.0.1:65121 2023-02-08 03:03:24,074 DEBUG [RS:2;jenkins-hbase12:40931] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37c9bba2 to 127.0.0.1:65121 2023-02-08 03:03:24,074 DEBUG [RS:1;jenkins-hbase12:45163] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,074 DEBUG [RS:2;jenkins-hbase12:40931] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,074 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,45163,1675825400901; all regions closed. 2023-02-08 03:03:24,074 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,40931,1675825400940; all regions closed. 2023-02-08 03:03:24,080 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,081 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,081 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,097 DEBUG [RS:2;jenkins-hbase12:40931] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs 2023-02-08 03:03:24,097 INFO [RS:2;jenkins-hbase12:40931] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C40931%2C1675825400940:(num 1675825402636) 2023-02-08 03:03:24,097 DEBUG [RS:2;jenkins-hbase12:40931] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,097 DEBUG [RS:1;jenkins-hbase12:45163] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs 2023-02-08 03:03:24,097 INFO [RS:2;jenkins-hbase12:40931] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,097 INFO [RS:1;jenkins-hbase12:45163] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C45163%2C1675825400901:(num 1675825402636) 2023-02-08 03:03:24,097 DEBUG [RS:1;jenkins-hbase12:45163] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,098 INFO [RS:1;jenkins-hbase12:45163] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,098 INFO [RS:2;jenkins-hbase12:40931] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-08 03:03:24,098 INFO [RS:1;jenkins-hbase12:45163] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-08 03:03:24,098 INFO [RS:2;jenkins-hbase12:40931] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-08 03:03:24,098 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-02-08 03:03:24,098 INFO [RS:2;jenkins-hbase12:40931] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:24,098 INFO [RS:1;jenkins-hbase12:45163] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-08 03:03:24,098 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:24,099 INFO [RS:1;jenkins-hbase12:45163] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:24,099 INFO [RS:2;jenkins-hbase12:40931] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:24,099 INFO [RS:1;jenkins-hbase12:45163] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:24,099 INFO [RS:2;jenkins-hbase12:40931] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:40931 2023-02-08 03:03:24,099 INFO [RS:1;jenkins-hbase12:45163] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:45163 2023-02-08 03:03:24,120 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:24,120 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:24,120 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:24,121 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,40931,1675825400940 2023-02-08 03:03:24,121 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@e11021a rejected from java.util.concurrent.ThreadPoolExecutor@a2e8700[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,121 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:24,122 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:24,122 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:24,122 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:24,122 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:24,122 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@55f5350c rejected from java.util.concurrent.ThreadPoolExecutor@51cabb6a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,122 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@78aed8d8 rejected from java.util.concurrent.ThreadPoolExecutor@a2e8700[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,123 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,45163,1675825400901 2023-02-08 03:03:24,125 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@59378fb4 rejected from java.util.concurrent.ThreadPoolExecutor@51cabb6a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/.tmp/info/9e135d8bef0a4924bbf3a9968d2357d7 2023-02-08 03:03:24,137 INFO [M:0;jenkins-hbase12:41409] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/98d4a75d27bd49b89ac27203c0a789f9 2023-02-08 03:03:24,138 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/.tmp/info/8b55113abe1948bd8d34561157bda0ff 2023-02-08 03:03:24,177 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/98d4a75d27bd49b89ac27203c0a789f9 as hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/98d4a75d27bd49b89ac27203c0a789f9 2023-02-08 03:03:24,177 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/.tmp/info/9e135d8bef0a4924bbf3a9968d2357d7 as hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/info/9e135d8bef0a4924bbf3a9968d2357d7 2023-02-08 03:03:24,191 INFO [M:0;jenkins-hbase12:41409] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/98d4a75d27bd49b89ac27203c0a789f9, entries=8, sequenceid=66, filesize=6.3 K 2023-02-08 03:03:24,193 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/info/9e135d8bef0a4924bbf3a9968d2357d7, entries=2, sequenceid=6, filesize=4.8 K 2023-02-08 03:03:24,196 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegion(2947): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 134ms, sequenceid=66, compaction requested=false 2023-02-08 03:03:24,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 809a9d3a09a6df42a8f670f8902e0fee in 174ms, sequenceid=6, compaction requested=false 2023-02-08 03:03:24,197 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/.tmp/table/3e6af82d4823475cbdcabc622e456507 2023-02-08 03:03:24,198 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-02-08 03:03:24,200 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:24,200 DEBUG [M:0;jenkins-hbase12:41409] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: Waiting for close lock at 1675825404061Disabling compacts and flushes for region at 1675825404061Disabling writes for close at 1675825404062 (+1 ms)Obtaining lock to block concurrent updates at 1675825404063 (+1 ms)Preparing flush snapshotting stores in 1595e783b53d99cd5eef43b6debb2682 at 1675825404063Finished memstore snapshotting master:store,,1.1595e783b53d99cd5eef43b6debb2682., syncing WAL and waiting on mvcc, flushsize=dataSize=24669, getHeapSize=30280, getOffHeapSize=0, getCellsCount=71 at 1675825404063Flushing stores of master:store,,1.1595e783b53d99cd5eef43b6debb2682. at 1675825404065 (+2 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: creating writer at 1675825404066 (+1 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: appending metadata at 1675825404091 (+25 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: closing flushed file at 1675825404094 (+3 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: reopening flushed file at 1675825404179 (+85 ms)Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 134ms, sequenceid=66, compaction requested=false at 1675825404196 (+17 ms)Writing region close event to WAL at 1675825404200 (+4 ms)Closed at 1675825404200 2023-02-08 03:03:24,209 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:24,209 INFO [M:0;jenkins-hbase12:41409] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-02-08 03:03:24,210 INFO [M:0;jenkins-hbase12:41409] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:41409 2023-02-08 03:03:24,218 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/namespace/809a9d3a09a6df42a8f670f8902e0fee/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-02-08 03:03:24,219 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 2023-02-08 03:03:24,219 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/.tmp/info/8b55113abe1948bd8d34561157bda0ff as hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/info/8b55113abe1948bd8d34561157bda0ff 2023-02-08 03:03:24,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 809a9d3a09a6df42a8f670f8902e0fee: Waiting for close lock at 1675825404019Running coprocessor pre-close hooks at 1675825404019Disabling compacts and flushes for region at 1675825404019Disabling writes for close at 1675825404020 (+1 ms)Obtaining lock to block concurrent updates at 1675825404022 (+2 ms)Preparing flush snapshotting stores in 809a9d3a09a6df42a8f670f8902e0fee at 1675825404022Finished memstore snapshotting hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee., syncing WAL and waiting on mvcc, flushsize=dataSize=78, getHeapSize=472, getOffHeapSize=0, getCellsCount=2 at 1675825404033 (+11 ms)Flushing stores of hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. at 1675825404035 (+2 ms)Flushing 809a9d3a09a6df42a8f670f8902e0fee/info: creating writer at 1675825404039 (+4 ms)Flushing 809a9d3a09a6df42a8f670f8902e0fee/info: appending metadata at 1675825404090 (+51 ms)Flushing 809a9d3a09a6df42a8f670f8902e0fee/info: closing flushed file at 1675825404094 (+4 ms)Flushing 809a9d3a09a6df42a8f670f8902e0fee/info: reopening flushed file at 1675825404179 (+85 ms)Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 809a9d3a09a6df42a8f670f8902e0fee in 174ms, sequenceid=6, compaction requested=false at 1675825404196 (+17 ms)Writing region close event to WAL at 1675825404209 (+13 ms)Running coprocessor post-close hooks at 1675825404218 (+9 ms)Closed at 1675825404219 (+1 ms) 2023-02-08 03:03:24,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1675825403040.809a9d3a09a6df42a8f670f8902e0fee. 
2023-02-08 03:03:24,223 DEBUG [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1505): Waiting on 1588230740 2023-02-08 03:03:24,227 DEBUG [M:0;jenkins-hbase12:41409] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase12.apache.org,41409,1675825399690 already deleted, retry=false 2023-02-08 03:03:24,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/info/8b55113abe1948bd8d34561157bda0ff, entries=10, sequenceid=9, filesize=5.9 K 2023-02-08 03:03:24,235 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/.tmp/table/3e6af82d4823475cbdcabc622e456507 as hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/table/3e6af82d4823475cbdcabc622e456507 2023-02-08 03:03:24,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/table/3e6af82d4823475cbdcabc622e456507, entries=2, sequenceid=9, filesize=4.7 K 2023-02-08 03:03:24,245 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 223ms, sequenceid=9, compaction requested=false 2023-02-08 03:03:24,245 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-02-08 03:03:24,258 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-02-08 03:03:24,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-08 03:03:24,260 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:24,260 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 1588230740: Waiting for close lock at 1675825404022Running coprocessor pre-close hooks at 1675825404022Disabling compacts and flushes for region at 1675825404022Disabling writes for close at 1675825404022Obtaining lock to block concurrent updates at 1675825404022Preparing flush snapshotting stores in 1588230740 at 1675825404022Finished memstore snapshotting hbase:meta,,1.1588230740, syncing WAL and waiting on mvcc, flushsize=dataSize=1292, getHeapSize=2912, getOffHeapSize=0, getCellsCount=12 at 1675825404033 (+11 ms)Flushing stores of hbase:meta,,1.1588230740 at 1675825404035 (+2 ms)Flushing 1588230740/info: creating writer at 1675825404039 (+4 ms)Flushing 1588230740/info: appending metadata at 1675825404090 (+51 ms)Flushing 1588230740/info: closing flushed file at 1675825404094 (+4 ms)Flushing 1588230740/table: creating writer at 1675825404175 (+81 ms)Flushing 1588230740/table: appending metadata at 1675825404181 (+6 ms)Flushing 1588230740/table: closing flushed file at 
1675825404181Flushing 1588230740/info: reopening flushed file at 1675825404220 (+39 ms)Flushing 1588230740/table: reopening flushed file at 1675825404236 (+16 ms)Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 223ms, sequenceid=9, compaction requested=false at 1675825404245 (+9 ms)Writing region close event to WAL at 1675825404252 (+7 ms)Running coprocessor post-close hooks at 1675825404259 (+7 ms)Closed at 1675825404260 (+1 ms) 2023-02-08 03:03:24,260 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:24,425 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,44017,1675825400849; all regions closed. 2023-02-08 03:03:24,440 DEBUG [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs 2023-02-08 03:03:24,440 INFO [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C44017%2C1675825400849.meta:.meta(num 1675825402729) 2023-02-08 03:03:24,448 DEBUG [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/oldWALs 2023-02-08 03:03:24,448 INFO [RS:0;jenkins-hbase12:44017] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C44017%2C1675825400849:(num 1675825402636) 2023-02-08 03:03:24,448 DEBUG [RS:0;jenkins-hbase12:44017] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:24,448 INFO [RS:0;jenkins-hbase12:44017] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:24,448 INFO [RS:0;jenkins-hbase12:44017] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-02-08 03:03:24,449 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-02-08 03:03:24,449 INFO [RS:0;jenkins-hbase12:44017] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:44017 2023-02-08 03:03:24,459 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,44017,1675825400849 2023-02-08 03:03:24,459 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@472533a8 rejected from java.util.concurrent.ThreadPoolExecutor@431aaf94[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 7] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,734 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,734 INFO [RS:0;jenkins-hbase12:44017] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,44017,1675825400849; zookeeper connection closed. 
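The repeated ERROR "Error while calling watcher ... RejectedExecutionException" entries here and in the surrounding lines are a teardown-ordering symptom rather than a functional failure: ZooKeeper's event thread is still delivering NodeDeleted/Closed events, but ZKWatcher.process submits them to an executor that has already been shut down, so the pool's default AbortPolicy rejects the task. A minimal, plain-JDK sketch of that behaviour (generic executor semantics, not HBase code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;

    public class RejectedAfterShutdown {
      public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // same "Shutting down"/"Terminated" state shown in the ERROR lines
        try {
          // Any submit after shutdown is refused by the default AbortPolicy.
          pool.submit(() -> System.out.println("late watcher event"));
        } catch (RejectedExecutionException e) {
          System.out.println("rejected, as in the watcher errors above: " + e);
        }
      }
    }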
2023-02-08 03:03:24,734 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@59820850 rejected from java.util.concurrent.ThreadPoolExecutor@431aaf94[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 7] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,735 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c060da7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c060da7 2023-02-08 03:03:24,735 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:44017-0x10140860fd10001, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,736 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@21a4b7da rejected from java.util.concurrent.ThreadPoolExecutor@431aaf94[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 7] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,834 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,834 INFO [M:0;jenkins-hbase12:41409] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,41409,1675825399690; zookeeper connection closed. 
2023-02-08 03:03:24,835 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@481d99 rejected from java.util.concurrent.ThreadPoolExecutor@7b472967[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 24] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,835 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): master:41409-0x10140860fd10000, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,835 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@62e921dc rejected from java.util.concurrent.ThreadPoolExecutor@7b472967[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 24] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,935 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,935 INFO [RS:1;jenkins-hbase12:45163] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,45163,1675825400901; zookeeper connection closed. 
2023-02-08 03:03:24,935 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@218362f8 rejected from java.util.concurrent.ThreadPoolExecutor@51cabb6a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:24,936 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@364fad7e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@364fad7e 2023-02-08 03:03:24,936 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:45163-0x10140860fd10002, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:24,936 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1148bb7f rejected from java.util.concurrent.ThreadPoolExecutor@51cabb6a[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:25,035 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:25,036 INFO [RS:2;jenkins-hbase12:40931] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,40931,1675825400940; zookeeper connection closed. 
2023-02-08 03:03:25,036 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@418a47 rejected from java.util.concurrent.ThreadPoolExecutor@a2e8700[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:25,037 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4fb2e865] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4fb2e865 2023-02-08 03:03:25,037 DEBUG [Listener at localhost.localdomain/42545-EventThread] zookeeper.ZKWatcher(600): regionserver:40931-0x10140860fd10003, quorum=127.0.0.1:65121, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:25,037 ERROR [Listener at localhost.localdomain/42545-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2165924e rejected from java.util.concurrent.ThreadPoolExecutor@a2e8700[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:25,037 INFO [Listener at localhost.localdomain/42545] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-02-08 03:03:25,040 WARN [Listener at localhost.localdomain/42545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:25,142 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:25,252 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-08 03:03:25,252 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1590307008-136.243.104.168-1675825395009 (Datanode Uuid b128cda4-c4d3-4954-88b2-0e89dc22df49) service to localhost.localdomain/127.0.0.1:41189 2023-02-08 03:03:25,256 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data5/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,256 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data6/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,261 WARN [Listener at localhost.localdomain/42545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:25,265 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:25,366 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-08 03:03:25,366 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1590307008-136.243.104.168-1675825395009 (Datanode Uuid a97f1446-8120-4937-8ae2-b8820660b183) service to localhost.localdomain/127.0.0.1:41189 2023-02-08 03:03:25,366 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data3/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,367 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data4/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,378 WARN [Listener at localhost.localdomain/42545] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:25,380 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:25,482 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-08 03:03:25,483 WARN [BP-1590307008-136.243.104.168-1675825395009 heartbeating to localhost.localdomain/127.0.0.1:41189] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1590307008-136.243.104.168-1675825395009 (Datanode Uuid b3a0df47-9464-4fb4-a2ea-e973fa7789df) service to localhost.localdomain/127.0.0.1:41189 2023-02-08 03:03:25,484 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data1/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,485 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/cluster_abcfba1f-57f2-30b9-4464-39a5faadee22/dfs/data/data2/current/BP-1590307008-136.243.104.168-1675825395009] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:25,514 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-02-08 03:03:25,637 INFO [Listener at localhost.localdomain/42545] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-02-08 03:03:25,672 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-02-08 03:03:25,686 INFO [Listener at localhost.localdomain/42545] hbase.ResourceChecker(175): after: client.TestAsyncClusterAdminApi2#testStop Thread=78 (was 8) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: ForkJoinPool-2-worker-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-4-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-7-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:41189 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-6-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Monitor thread for TaskMonitor java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:300) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-2-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-7-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'NameNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: region-location-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase12:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SnapshotHandlerChoreCleaner sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-2 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@365ed6ca java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) 
org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Idle-Rpc-Conn-Sweeper-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: nioEventLoopGroup-3-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: LeaseRenewer:jenkins.hfs.1@localhost.localdomain:41189 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: ForkJoinPool-2-worker-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-1-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-3-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-1-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcClient-timer-pool-0 java.lang.Thread.sleep(Native Method) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:600) org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:496) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-7-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:41189 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase12:0.procedureResultReporter sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-4-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase12:0:becomeActiveMaster-MemStoreChunkPool Statistics sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: SessionTracker java.lang.Thread.sleep(Native Method) org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:151) Potentially hanging thread: nioEventLoopGroup-6-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-2-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:41189 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:41189 from jenkins.hfs.0 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:41189 from jenkins.hfs.1 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner java.lang.Object.wait(Native Method) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144) java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165) org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3693) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.2@localhost.localdomain:41189 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/42545 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-5-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-5-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.0@localhost.localdomain:41189 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-6-2 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:41189 from jenkins.hfs.2 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=501 (was 260) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=383 (was 390), ProcessCount=172 (was 171) - ProcessCount LEAK? 
-, AvailableMemoryMB=2996 (was 3571) 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.ResourceChecker(147): before: client.TestAsyncClusterAdminApi2#testShutdown Thread=78, OpenFileDescriptor=501, MaxFileDescriptor=60000, SystemLoadAverage=383, ProcessCount=172, AvailableMemoryMB=2996 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/hadoop.log.dir so I do NOT create it in target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/2cbb7743-0e24-0498-82de-0c52bf2b6fac/hadoop.tmp.dir so I do NOT create it in target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630, deleteOnExit=true 2023-02-08 03:03:25,699 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-02-08 03:03:25,700 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/test.cache.data in system properties and HBase conf 2023-02-08 03:03:25,700 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/hadoop.tmp.dir in system properties and HBase conf 2023-02-08 03:03:25,700 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/hadoop.log.dir in system properties and HBase conf 2023-02-08 03:03:25,700 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/mapreduce.cluster.local.dir in system properties and HBase conf 2023-02-08 03:03:25,700 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-02-08 03:03:25,700 INFO [Listener at 
localhost.localdomain/42545] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-02-08 03:03:25,700 DEBUG [Listener at localhost.localdomain/42545] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-02-08 03:03:25,701 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at 
localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/nfs.dump.dir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/java.io.tmpdir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/dfs.journalnode.edits.dir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-02-08 03:03:25,702 INFO [Listener at localhost.localdomain/42545] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-02-08 03:03:25,705 WARN [Listener at localhost.localdomain/42545] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-08 03:03:25,705 WARN [Listener at localhost.localdomain/42545] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-08 03:03:26,077 WARN [Listener at localhost.localdomain/42545] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:26,079 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:26,084 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/java.io.tmpdir/Jetty_localhost_localdomain_41601_hdfs____.hukc24/webapp 2023-02-08 03:03:26,162 INFO [Listener at localhost.localdomain/42545] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:41601 2023-02-08 03:03:26,165 WARN [Listener at localhost.localdomain/42545] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-02-08 03:03:26,165 WARN [Listener at localhost.localdomain/42545] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-02-08 03:03:26,388 WARN [Listener at localhost.localdomain/36579] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:26,401 WARN [Listener at localhost.localdomain/36579] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming 
MILLISECONDS 2023-02-08 03:03:26,404 WARN [Listener at localhost.localdomain/36579] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:26,405 INFO [Listener at localhost.localdomain/36579] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:26,412 INFO [Listener at localhost.localdomain/36579] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/java.io.tmpdir/Jetty_localhost_34089_datanode____.bify09/webapp 2023-02-08 03:03:26,490 INFO [Listener at localhost.localdomain/36579] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34089 2023-02-08 03:03:26,496 WARN [Listener at localhost.localdomain/44107] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:26,511 WARN [Listener at localhost.localdomain/44107] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-08 03:03:26,514 WARN [Listener at localhost.localdomain/44107] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:26,515 INFO [Listener at localhost.localdomain/44107] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:26,518 INFO [Listener at localhost.localdomain/44107] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/java.io.tmpdir/Jetty_localhost_42857_datanode____.m4fjp9/webapp 2023-02-08 03:03:26,591 INFO [Listener at localhost.localdomain/44107] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42857 2023-02-08 03:03:26,600 WARN [Listener at localhost.localdomain/40261] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:26,613 WARN [Listener at localhost.localdomain/40261] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-02-08 03:03:26,615 WARN [Listener at localhost.localdomain/40261] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-02-08 03:03:26,617 INFO [Listener at localhost.localdomain/40261] log.Slf4jLog(67): jetty-6.1.26 2023-02-08 03:03:26,621 INFO [Listener at localhost.localdomain/40261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/java.io.tmpdir/Jetty_localhost_32901_datanode____.jomese/webapp 2023-02-08 03:03:26,693 INFO [Listener at localhost.localdomain/40261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32901 2023-02-08 03:03:26,701 WARN [Listener at localhost.localdomain/37527] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-02-08 03:03:28,090 INFO [Block 
report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd76789812aeb426: Processing first storage report for DS-4574e323-da94-446e-9ccd-ef1806a2175d from datanode d9d7d489-11aa-4b25-8e2a-2ac51e0c1dcb 2023-02-08 03:03:28,090 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd76789812aeb426: from storage DS-4574e323-da94-446e-9ccd-ef1806a2175d node DatanodeRegistration(127.0.0.1:36761, datanodeUuid=d9d7d489-11aa-4b25-8e2a-2ac51e0c1dcb, infoPort=45679, infoSecurePort=0, ipcPort=44107, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,090 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfd76789812aeb426: Processing first storage report for DS-26f8646e-b2bc-4ea3-a99b-f9bd2d397245 from datanode d9d7d489-11aa-4b25-8e2a-2ac51e0c1dcb 2023-02-08 03:03:28,090 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfd76789812aeb426: from storage DS-26f8646e-b2bc-4ea3-a99b-f9bd2d397245 node DatanodeRegistration(127.0.0.1:36761, datanodeUuid=d9d7d489-11aa-4b25-8e2a-2ac51e0c1dcb, infoPort=45679, infoSecurePort=0, ipcPort=44107, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,301 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-02-08 03:03:28,427 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa66c588467cd0ee7: Processing first storage report for DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17 from datanode 11132927-5840-4328-a9b7-ccc80d0d7777 2023-02-08 03:03:28,427 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa66c588467cd0ee7: from storage DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17 node DatanodeRegistration(127.0.0.1:40521, datanodeUuid=11132927-5840-4328-a9b7-ccc80d0d7777, infoPort=44797, infoSecurePort=0, ipcPort=40261, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,427 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa66c588467cd0ee7: Processing first storage report for DS-ceff2ab6-ca28-4d64-a92e-c2cecb17ec7b from datanode 11132927-5840-4328-a9b7-ccc80d0d7777 2023-02-08 03:03:28,427 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa66c588467cd0ee7: from storage DS-ceff2ab6-ca28-4d64-a92e-c2cecb17ec7b node DatanodeRegistration(127.0.0.1:40521, datanodeUuid=11132927-5840-4328-a9b7-ccc80d0d7777, infoPort=44797, infoSecurePort=0, ipcPort=40261, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x374152a8f3676ab0: Processing first storage report for DS-eafb4b27-e128-45a5-8234-0575db08e903 from datanode 4e4ad012-0018-4116-9af1-a1aab5ae3b4a 2023-02-08 03:03:28,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x374152a8f3676ab0: from storage DS-eafb4b27-e128-45a5-8234-0575db08e903 node 
DatanodeRegistration(127.0.0.1:45923, datanodeUuid=4e4ad012-0018-4116-9af1-a1aab5ae3b4a, infoPort=33135, infoSecurePort=0, ipcPort=37527, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,480 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x374152a8f3676ab0: Processing first storage report for DS-3bb76638-5412-42a3-b178-8807d1598c81 from datanode 4e4ad012-0018-4116-9af1-a1aab5ae3b4a 2023-02-08 03:03:28,480 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x374152a8f3676ab0: from storage DS-3bb76638-5412-42a3-b178-8807d1598c81 node DatanodeRegistration(127.0.0.1:45923, datanodeUuid=4e4ad012-0018-4116-9af1-a1aab5ae3b4a, infoPort=33135, infoSecurePort=0, ipcPort=37527, storageInfo=lv=-57;cid=testClusterID;nsid=1336060385;c=1675825405707), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-02-08 03:03:28,534 DEBUG [Listener at localhost.localdomain/37527] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91 2023-02-08 03:03:28,538 INFO [Listener at localhost.localdomain/37527] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/zookeeper_0, clientPort=58596, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-02-08 03:03:28,540 INFO [Listener at localhost.localdomain/37527] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58596 2023-02-08 03:03:28,540 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,541 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,563 INFO [Listener at localhost.localdomain/37527] util.FSUtils(479): Created version file at hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257 with version=8 2023-02-08 03:03:28,563 INFO [Listener at localhost.localdomain/37527] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:41189/user/jenkins/test-data/11b1c2ee-66d5-00e7-851b-f692d8907916/hbase-staging 2023-02-08 03:03:28,565 INFO [Listener at localhost.localdomain/37527] client.ConnectionUtils(127): master/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:28,565 INFO [Listener at 
localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,565 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,565 INFO [Listener at localhost.localdomain/37527] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:28,566 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,566 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:28,566 INFO [Listener at localhost.localdomain/37527] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:28,568 INFO [Listener at localhost.localdomain/37527] ipc.NettyRpcServer(120): Bind to /136.243.104.168:42925 2023-02-08 03:03:28,569 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,570 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,571 INFO [Listener at localhost.localdomain/37527] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42925 connecting to ZooKeeper ensemble=127.0.0.1:58596 2023-02-08 03:03:28,635 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:429250x0, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:28,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): master:42925-0x101408635840000 connected 2023-02-08 03:03:28,739 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:28,740 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:28,741 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:28,741 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42925 2023-02-08 03:03:28,742 DEBUG [Listener at 
localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42925 2023-02-08 03:03:28,742 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42925 2023-02-08 03:03:28,742 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42925 2023-02-08 03:03:28,743 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42925 2023-02-08 03:03:28,743 INFO [Listener at localhost.localdomain/37527] master.HMaster(439): hbase.rootdir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257, hbase.cluster.distributed=false 2023-02-08 03:03:28,760 INFO [Listener at localhost.localdomain/37527] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:28,760 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,760 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,760 INFO [Listener at localhost.localdomain/37527] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:28,760 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,761 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:28,761 INFO [Listener at localhost.localdomain/37527] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:28,762 INFO [Listener at localhost.localdomain/37527] ipc.NettyRpcServer(120): Bind to /136.243.104.168:43487 2023-02-08 03:03:28,763 INFO [Listener at localhost.localdomain/37527] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:28,764 DEBUG [Listener at localhost.localdomain/37527] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:28,765 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,766 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,767 INFO [Listener at localhost.localdomain/37527] zookeeper.RecoverableZooKeeper(93): Process 
identifier=regionserver:43487 connecting to ZooKeeper ensemble=127.0.0.1:58596 2023-02-08 03:03:28,780 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:434870x0, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:28,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:43487-0x101408635840001 connected 2023-02-08 03:03:28,782 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:28,783 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:28,784 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:28,784 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43487 2023-02-08 03:03:28,784 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43487 2023-02-08 03:03:28,785 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43487 2023-02-08 03:03:28,785 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43487 2023-02-08 03:03:28,786 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43487 2023-02-08 03:03:28,796 INFO [Listener at localhost.localdomain/37527] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:28,797 INFO [Listener at localhost.localdomain/37527] ipc.RpcServerFactory(64): Creating 
org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:28,799 INFO [Listener at localhost.localdomain/37527] ipc.NettyRpcServer(120): Bind to /136.243.104.168:41267 2023-02-08 03:03:28,799 INFO [Listener at localhost.localdomain/37527] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:28,801 DEBUG [Listener at localhost.localdomain/37527] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:28,802 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,803 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,804 INFO [Listener at localhost.localdomain/37527] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41267 connecting to ZooKeeper ensemble=127.0.0.1:58596 2023-02-08 03:03:28,818 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:412670x0, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:28,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:41267-0x101408635840002 connected 2023-02-08 03:03:28,819 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:28,820 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:28,820 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:28,821 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41267 2023-02-08 03:03:28,821 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41267 2023-02-08 03:03:28,821 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41267 2023-02-08 03:03:28,821 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41267 2023-02-08 03:03:28,822 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41267 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] client.ConnectionUtils(127): regionserver/jenkins-hbase12:0 server-side Connection retries=6 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RWQueueRpcExecutor(105): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-02-08 03:03:28,832 INFO [Listener at localhost.localdomain/37527] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-02-08 03:03:28,834 INFO [Listener at localhost.localdomain/37527] ipc.NettyRpcServer(120): Bind to /136.243.104.168:37951 2023-02-08 03:03:28,834 INFO [Listener at localhost.localdomain/37527] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-02-08 03:03:28,835 DEBUG [Listener at localhost.localdomain/37527] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-02-08 03:03:28,836 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,837 INFO [Listener at localhost.localdomain/37527] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,838 INFO [Listener at localhost.localdomain/37527] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37951 connecting to ZooKeeper ensemble=127.0.0.1:58596 2023-02-08 03:03:28,849 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:379510x0, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-02-08 03:03:28,850 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:379510x0, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:28,850 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(623): regionserver:37951-0x101408635840003 connected 2023-02-08 03:03:28,851 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:28,851 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ZKUtil(164): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-02-08 03:03:28,852 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37951 2023-02-08 03:03:28,852 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37951 2023-02-08 03:03:28,852 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37951 2023-02-08 03:03:28,853 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37951 2023-02-08 03:03:28,853 DEBUG [Listener at localhost.localdomain/37527] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37951 2023-02-08 03:03:28,856 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2158): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:28,864 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-08 03:03:28,865 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:28,875 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:28,875 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:28,875 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:28,875 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:28,875 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:28,878 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-08 03:03:28,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-02-08 03:03:28,880 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(224): Deleting ZNode for /hbase/backup-masters/jenkins-hbase12.apache.org,42925,1675825408564 from backup master directory 
2023-02-08 03:03:28,891 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:28,891 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-02-08 03:03:28,891 WARN [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:28,891 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ActiveMasterManager(234): Registered as active master=jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:28,915 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] util.FSUtils(628): Created cluster ID file at hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/hbase.id with ID: 2ce2432d-d456-4094-a65c-0ca5daf194d4 2023-02-08 03:03:28,931 INFO [master/jenkins-hbase12:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-02-08 03:03:28,944 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:28,958 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x39df3ffe to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:28,971 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62f0ecfb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:28,971 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:28,971 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-02-08 03:03:28,972 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:28,974 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7689): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store-tmp 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-08 03:03:28,991 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:28,991 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:28,991 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: Waiting for close lock at 1675825408991Disabling compacts and flushes for region at 1675825408991Disabling writes for close at 1675825408991Writing region close event to WAL at 1675825408991Closed at 1675825408991 2023-02-08 03:03:28,992 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/WALs/jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:28,996 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C42925%2C1675825408564, suffix=, logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/WALs/jenkins-hbase12.apache.org,42925,1675825408564, archiveDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/oldWALs, maxLogs=10 2023-02-08 03:03:29,013 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK] 2023-02-08 03:03:29,015 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK] 2023-02-08 03:03:29,015 DEBUG [RS-EventLoopGroup-10-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK] 2023-02-08 03:03:29,018 INFO [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/WALs/jenkins-hbase12.apache.org,42925,1675825408564/jenkins-hbase12.apache.org%2C42925%2C1675825408564.1675825408996 2023-02-08 03:03:29,019 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK], DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK], DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK]] 2023-02-08 03:03:29,019 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7850): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-02-08 03:03:29,019 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(865): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:29,019 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7890): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,019 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(7893): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,022 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,024 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-02-08 03:03:29,024 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-02-08 03:03:29,025 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-02-08 03:03:29,026 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,027 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,030 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1054): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-02-08 03:03:29,032 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:29,033 INFO [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(1071): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=63694835, jitterRate=-0.05087299644947052}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-08 03:03:29,033 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] regionserver.HRegion(964): Region open journal for 1595e783b53d99cd5eef43b6debb2682: Writing region info on filesystem at 1675825409019Initializing all the Stores at 1675825409021 (+2 ms)Instantiating store for column family {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825409021Cleaning up temporary data from old regions at 1675825409028 (+7 ms)Cleaning up detritus from prior splits at 1675825409028Region opened successfully at 1675825409033 (+5 ms) 2023-02-08 03:03:29,034 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-02-08 03:03:29,035 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-02-08 03:03:29,035 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-02-08 03:03:29,035 INFO [master/jenkins-hbase12:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-02-08 03:03:29,036 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-02-08 03:03:29,036 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-02-08 03:03:29,036 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(95): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-02-08 03:03:29,039 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-02-08 03:03:29,040 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-02-08 03:03:29,053 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-02-08 03:03:29,054 INFO [master/jenkins-hbase12:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-02-08 03:03:29,054 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-02-08 03:03:29,054 INFO [master/jenkins-hbase12:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-02-08 03:03:29,055 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-02-08 03:03:29,064 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:29,065 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-02-08 03:03:29,066 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-02-08 03:03:29,067 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-02-08 03:03:29,075 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:29,075 DEBUG 
[Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:29,075 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:29,075 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:29,075 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:29,075 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(739): Active/primary master=jenkins-hbase12.apache.org,42925,1675825408564, sessionid=0x101408635840000, setting cluster-up flag (Was=false) 2023-02-08 03:03:29,096 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:29,128 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-02-08 03:03:29,131 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:29,155 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:29,191 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-02-08 03:03:29,196 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:29,198 WARN [master/jenkins-hbase12:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.hbase-snapshot/.tmp 2023-02-08 03:03:29,203 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:29,203 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase12:0, corePoolSize=5, 
maxPoolSize=5 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=5, maxPoolSize=5 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase12:0, corePoolSize=10, maxPoolSize=10 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:29,204 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,205 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1675825439205 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-02-08 03:03:29,206 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:29,207 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-02-08 03:03:29,207 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:29,207 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-02-08 03:03:29,207 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-02-08 03:03:29,207 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-02-08 03:03:29,207 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-02-08 03:03:29,207 INFO [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-02-08 03:03:29,208 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825409207,5,FailOnTimeoutGroup] 2023-02-08 03:03:29,208 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825409208,5,FailOnTimeoutGroup] 2023-02-08 03:03:29,208 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,208 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1451): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-02-08 03:03:29,208 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,208 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:29,208 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:29,226 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:29,227 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:29,227 INFO [PEWorker-1] regionserver.HRegion(7671): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257 2023-02-08 03:03:29,239 DEBUG [PEWorker-1] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:29,241 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-08 03:03:29,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/info 2023-02-08 03:03:29,244 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-08 03:03:29,245 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,245 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-08 03:03:29,246 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:29,247 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-08 03:03:29,247 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,247 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-08 03:03:29,249 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/table 2023-02-08 03:03:29,249 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-08 03:03:29,250 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,251 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740 2023-02-08 03:03:29,252 DEBUG [PEWorker-1] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740 2023-02-08 03:03:29,254 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-02-08 03:03:29,255 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(952): ClusterId : 2ce2432d-d456-4094-a65c-0ca5daf194d4 2023-02-08 03:03:29,255 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(952): ClusterId : 2ce2432d-d456-4094-a65c-0ca5daf194d4 2023-02-08 03:03:29,256 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:29,255 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(952): ClusterId : 2ce2432d-d456-4094-a65c-0ca5daf194d4 2023-02-08 03:03:29,257 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:29,258 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-02-08 03:03:29,259 DEBUG [PEWorker-1] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-08 03:03:29,262 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:29,263 INFO [PEWorker-1] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=68300371, jitterRate=0.01775483787059784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-08 03:03:29,263 DEBUG [PEWorker-1] regionserver.HRegion(964): Region open journal for 1588230740: Writing region info on filesystem at 1675825409239Initializing all the Stores at 1675825409240 (+1 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825409240Instantiating store for column family {NAME => 
'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825409241 (+1 ms)Instantiating store for column family {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825409241Cleaning up temporary data from old regions at 1675825409253 (+12 ms)Cleaning up detritus from prior splits at 1675825409253Region opened successfully at 1675825409263 (+10 ms) 2023-02-08 03:03:29,263 DEBUG [PEWorker-1] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-08 03:03:29,263 INFO [PEWorker-1] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-08 03:03:29,263 DEBUG [PEWorker-1] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-08 03:03:29,263 DEBUG [PEWorker-1] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-08 03:03:29,263 DEBUG [PEWorker-1] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-08 03:03:29,264 INFO [PEWorker-1] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:29,264 DEBUG [PEWorker-1] regionserver.HRegion(1557): Region close journal for 1588230740: Waiting for close lock at 1675825409263Disabling compacts and flushes for region at 1675825409263Disabling writes for close at 1675825409263Writing region close event to WAL at 1675825409264 (+1 ms)Closed at 1675825409264 2023-02-08 03:03:29,265 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-02-08 03:03:29,265 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-02-08 03:03:29,265 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-02-08 03:03:29,268 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-02-08 03:03:29,269 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-02-08 03:03:29,281 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-08 03:03:29,281 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:29,282 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-02-08 03:03:29,282 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 
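The hbase:meta descriptor printed above (info, rep_barrier and table families plus the MultiRowMutationEndpoint coprocessor) can be expressed with the public TableDescriptorBuilder API. The sketch below uses a hypothetical user table name "example" for illustration; hbase:meta itself is created internally by InitMetaProcedure, not by client code.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
        // Same coprocessor the logged descriptor carries.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // 'info' family: in-memory, 3 versions, 8 KB blocks, no bloom filter (as logged).
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setBloomFilterType(BloomType.NONE)
            .build())
        // 'rep_barrier' family keeps every version (2147483647 = Integer.MAX_VALUE, as logged).
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("rep_barrier"))
            .setInMemory(true)
            .setMaxVersions(Integer.MAX_VALUE)
            .setBlocksize(65536)
            .setBloomFilterType(BloomType.NONE)
            .build())
        // 'table' family mirrors the 'info' settings in the logged descriptor.
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("table"))
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setBloomFilterType(BloomType.NONE)
            .build())
        .build();
  }
}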
2023-02-08 03:03:29,282 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:29,282 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-02-08 03:03:29,303 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:29,305 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:29,305 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-02-08 03:03:29,308 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ReadOnlyZKClient(139): Connect 0x73736a8b to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:29,308 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ReadOnlyZKClient(139): Connect 0x7f55c12b to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:29,308 DEBUG [RS:2;jenkins-hbase12:37951] zookeeper.ReadOnlyZKClient(139): Connect 0x5cb2011f to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:29,329 DEBUG [RS:0;jenkins-hbase12:43487] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@774cf03d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:29,329 DEBUG [RS:2;jenkins-hbase12:37951] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5275f4f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:29,330 DEBUG [RS:0;jenkins-hbase12:43487] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@324e60b0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:29,330 DEBUG [RS:1;jenkins-hbase12:41267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@719230b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:29,330 DEBUG [RS:2;jenkins-hbase12:37951] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1348fec2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:29,331 DEBUG [RS:1;jenkins-hbase12:41267] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30af3a8a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:29,340 DEBUG [RS:0;jenkins-hbase12:43487] 
regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase12:43487 2023-02-08 03:03:29,340 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase12:37951 2023-02-08 03:03:29,340 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase12:41267 2023-02-08 03:03:29,340 INFO [RS:0;jenkins-hbase12:43487] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:29,340 INFO [RS:0;jenkins-hbase12:43487] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:29,340 INFO [RS:1;jenkins-hbase12:41267] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:29,340 INFO [RS:2;jenkins-hbase12:37951] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-02-08 03:03:29,341 INFO [RS:2;jenkins-hbase12:37951] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:29,340 INFO [RS:1;jenkins-hbase12:41267] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-02-08 03:03:29,340 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:29,341 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:29,341 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1023): About to register with Master. 2023-02-08 03:03:29,341 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,42925,1675825408564 with isa=jenkins-hbase12.apache.org/136.243.104.168:41267, startcode=1675825408796 2023-02-08 03:03:29,341 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,42925,1675825408564 with isa=jenkins-hbase12.apache.org/136.243.104.168:43487, startcode=1675825408760 2023-02-08 03:03:29,341 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(2810): reportForDuty to master=jenkins-hbase12.apache.org,42925,1675825408564 with isa=jenkins-hbase12.apache.org/136.243.104.168:37951, startcode=1675825408831 2023-02-08 03:03:29,342 DEBUG [RS:2;jenkins-hbase12:37951] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:29,342 DEBUG [RS:0;jenkins-hbase12:43487] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:29,342 DEBUG [RS:1;jenkins-hbase12:41267] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-02-08 03:03:29,345 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:60589, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-02-08 03:03:29,345 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:59919, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-02-08 03:03:29,345 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:45483, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 
2023-02-08 03:03:29,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42925] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,346 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42925] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,347 INFO [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42925] master.ServerManager(394): Registering regionserver=jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,347 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257 2023-02-08 03:03:29,347 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36579 2023-02-08 03:03:29,347 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-08 03:03:29,348 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257 2023-02-08 03:03:29,348 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36579 2023-02-08 03:03:29,348 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-08 03:03:29,348 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1596): Config from master: hbase.rootdir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257 2023-02-08 03:03:29,349 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1596): Config from master: fs.defaultFS=hdfs://localhost.localdomain:36579 2023-02-08 03:03:29,349 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1596): Config from master: hbase.master.info.port=-1 2023-02-08 03:03:29,359 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:29,399 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,399 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,41267,1675825408796] 2023-02-08 03:03:29,399 WARN [RS:1;jenkins-hbase12:41267] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
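With all three region servers registered above, the asynchronous client API that TestAsyncClusterAdminApi2 exercises can list them. A small usage sketch, assuming the client configuration points at this mini cluster's ZooKeeper (127.0.0.1, client port 58596, as seen in the log); the class name and printing are illustrative.

import java.util.Collection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListRegionServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");            // ZK host used by this run
    conf.setInt("hbase.zookeeper.property.clientPort", 58596);  // ZK port from the log; adjust as needed
    try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
      AsyncAdmin admin = conn.getAdmin();
      Collection<ServerName> servers = admin.getRegionServers().get();
      // For the mini cluster above this would list the three jenkins-hbase12 region servers.
      servers.forEach(sn -> System.out.println("registered: " + sn));
    }
  }
}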
2023-02-08 03:03:29,400 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ZKUtil(162): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,400 INFO [RS:1;jenkins-hbase12:41267] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:29,399 DEBUG [RS:2;jenkins-hbase12:37951] zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,399 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,43487,1675825408760] 2023-02-08 03:03:29,400 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,400 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase12.apache.org,37951,1675825408831] 2023-02-08 03:03:29,400 WARN [RS:2;jenkins-hbase12:37951] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:29,400 WARN [RS:0;jenkins-hbase12:43487] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-02-08 03:03:29,400 INFO [RS:2;jenkins-hbase12:37951] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:29,400 INFO [RS:0;jenkins-hbase12:43487] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:29,400 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,400 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1947): logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,408 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ZKUtil(162): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,408 DEBUG [RS:2;jenkins-hbase12:37951] zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,408 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,408 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ZKUtil(162): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,409 DEBUG [RS:2;jenkins-hbase12:37951] 
zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,409 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,409 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ZKUtil(162): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,409 DEBUG [RS:2;jenkins-hbase12:37951] zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,410 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,410 DEBUG [RS:0;jenkins-hbase12:43487] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:29,411 DEBUG [RS:2;jenkins-hbase12:37951] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:29,411 INFO [RS:0;jenkins-hbase12:43487] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:29,413 INFO [RS:0;jenkins-hbase12:43487] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:29,413 INFO [RS:2;jenkins-hbase12:37951] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:29,413 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-02-08 03:03:29,415 INFO [RS:1;jenkins-hbase12:41267] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-02-08 03:03:29,417 INFO [RS:0;jenkins-hbase12:43487] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:29,417 INFO [RS:1;jenkins-hbase12:41267] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:29,417 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:29,420 INFO [RS:2;jenkins-hbase12:37951] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-02-08 03:03:29,420 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:29,420 DEBUG [jenkins-hbase12:42925] assignment.AssignmentManager(2178): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-02-08 03:03:29,421 INFO [RS:1;jenkins-hbase12:41267] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:29,421 INFO [RS:2;jenkins-hbase12:37951] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-02-08 03:03:29,421 DEBUG [jenkins-hbase12:42925] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-08 03:03:29,421 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,421 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,423 DEBUG [jenkins-hbase12:42925] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-08 03:03:29,423 DEBUG [jenkins-hbase12:42925] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-08 03:03:29,423 DEBUG [jenkins-hbase12:42925] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-08 03:03:29,423 DEBUG [jenkins-hbase12:42925] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-08 03:03:29,426 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
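The throughput controller and memstore limits logged above map onto a few configuration keys. A sketch assuming the standard key names; the values simply restate what this run reports (100/50 MB/s compaction bounds, roughly 40% of heap for memstores) and would normally be tuned per deployment.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ThroughputAndMemstoreConfigSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Compaction throughput bounds in bytes/second (higher bound 100 MB/s, lower bound 50 MB/s, as logged).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    // Fraction of the region server heap reserved for memstores; the logged
    // globalMemStoreLimit=782.4 M is this fraction of the test JVM's heap.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    return conf;
  }
}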
2023-02-08 03:03:29,426 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,41267,1675825408796, state=OPENING 2023-02-08 03:03:29,426 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:29,426 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer$CompactionChecker(1838): CompactionChecker runs every PT1S 2023-02-08 03:03:29,427 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,427 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,427 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,427 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,427 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,428 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:29,428 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,428 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,428 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,428 DEBUG [RS:0;jenkins-hbase12:43487] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,428 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,433 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,433 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,433 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
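Once the hbase:meta location written to ZooKeeper above moves from OPENING to OPEN, clients can resolve it through the normal locator API. A short sketch; the connection is assumed to carry this cluster's ZooKeeper settings, and the printed expectation is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumed to point at this mini cluster's ZK
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      // Expected to report jenkins-hbase12.apache.org,41267,... once the assignment above completes.
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}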
2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,433 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:29,433 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase12:0, corePoolSize=2, maxPoolSize=2 2023-02-08 03:03:29,434 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:2;jenkins-hbase12:37951] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,434 DEBUG [RS:1;jenkins-hbase12:41267] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase12:0, corePoolSize=1, maxPoolSize=1 2023-02-08 03:03:29,436 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,436 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,437 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,441 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,441 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,441 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,443 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-02-08 03:03:29,444 INFO [RS:0;jenkins-hbase12:43487] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:29,444 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,43487,1675825408760-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,453 INFO [RS:2;jenkins-hbase12:37951] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:29,453 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,37951,1675825408831-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:29,453 INFO [RS:0;jenkins-hbase12:43487] regionserver.Replication(203): jenkins-hbase12.apache.org,43487,1675825408760 started 2023-02-08 03:03:29,454 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,43487,1675825408760, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:43487, sessionid=0x101408635840001 2023-02-08 03:03:29,454 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:29,454 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:29,454 DEBUG [RS:0;jenkins-hbase12:43487] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,454 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,43487,1675825408760' 2023-02-08 03:03:29,454 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:29,454 INFO [RS:1;jenkins-hbase12:41267] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-02-08 03:03:29,454 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,41267,1675825408796}] 2023-02-08 03:03:29,455 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,41267,1675825408796-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-02-08 03:03:29,455 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-08 03:03:29,455 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,43487,1675825408760' 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:29,456 DEBUG [RS:0;jenkins-hbase12:43487] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:29,457 DEBUG [RS:0;jenkins-hbase12:43487] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:29,457 INFO [RS:0;jenkins-hbase12:43487] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:29,457 INFO [RS:0;jenkins-hbase12:43487] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-08 03:03:29,470 INFO [RS:1;jenkins-hbase12:41267] regionserver.Replication(203): jenkins-hbase12.apache.org,41267,1675825408796 started 2023-02-08 03:03:29,470 INFO [RS:2;jenkins-hbase12:37951] regionserver.Replication(203): jenkins-hbase12.apache.org,37951,1675825408831 started 2023-02-08 03:03:29,470 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,37951,1675825408831, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:37951, sessionid=0x101408635840003 2023-02-08 03:03:29,470 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1638): Serving as jenkins-hbase12.apache.org,41267,1675825408796, RpcServer on jenkins-hbase12.apache.org/136.243.104.168:41267, sessionid=0x101408635840002 2023-02-08 03:03:29,470 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:29,470 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-02-08 03:03:29,470 DEBUG [RS:1;jenkins-hbase12:41267] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,470 DEBUG [RS:2;jenkins-hbase12:37951] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,470 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,37951,1675825408831' 2023-02-08 03:03:29,471 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:29,470 DEBUG 
[RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,41267,1675825408796' 2023-02-08 03:03:29,471 DEBUG [RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-02-08 03:03:29,471 DEBUG [RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:29,471 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,41267,1675825408796' 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase12.apache.org,37951,1675825408831' 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-02-08 03:03:29,472 DEBUG [RS:1;jenkins-hbase12:41267] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:29,472 INFO [RS:1;jenkins-hbase12:41267] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:29,472 INFO [RS:1;jenkins-hbase12:41267] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-02-08 03:03:29,472 DEBUG [RS:2;jenkins-hbase12:37951] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-02-08 03:03:29,472 INFO [RS:2;jenkins-hbase12:37951] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-02-08 03:03:29,472 INFO [RS:2;jenkins-hbase12:37951] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-02-08 03:03:29,560 INFO [RS:0;jenkins-hbase12:43487] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C43487%2C1675825408760, suffix=, logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,43487,1675825408760, archiveDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs, maxLogs=32 2023-02-08 03:03:29,577 INFO [RS:2;jenkins-hbase12:37951] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C37951%2C1675825408831, suffix=, logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,37951,1675825408831, archiveDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs, maxLogs=32 2023-02-08 03:03:29,577 INFO [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(464): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C41267%2C1675825408796, suffix=, logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,41267,1675825408796, archiveDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs, maxLogs=32 2023-02-08 03:03:29,587 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK] 2023-02-08 03:03:29,587 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK] 2023-02-08 03:03:29,587 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK] 2023-02-08 03:03:29,592 INFO [RS:0;jenkins-hbase12:43487] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,43487,1675825408760/jenkins-hbase12.apache.org%2C43487%2C1675825408760.1675825409561 2023-02-08 03:03:29,593 DEBUG [RS:0;jenkins-hbase12:43487] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK], DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK], DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK]] 2023-02-08 03:03:29,599 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK] 2023-02-08 03:03:29,599 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK] 2023-02-08 03:03:29,599 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK] 2023-02-08 03:03:29,602 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK] 2023-02-08 03:03:29,602 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK] 2023-02-08 03:03:29,602 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK] 2023-02-08 03:03:29,606 INFO [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,41267,1675825408796/jenkins-hbase12.apache.org%2C41267%2C1675825408796.1675825409579 2023-02-08 03:03:29,608 DEBUG [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK], DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK], DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK]] 2023-02-08 03:03:29,610 INFO [RS:2;jenkins-hbase12:37951] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,37951,1675825408831/jenkins-hbase12.apache.org%2C37951%2C1675825408831.1675825409580 2023-02-08 03:03:29,613 DEBUG [RS:2;jenkins-hbase12:37951] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK], DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK], DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK]] 2023-02-08 03:03:29,614 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:29,614 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-08 03:03:29,615 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43846, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-08 03:03:29,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:meta,,1.1588230740 2023-02-08 03:03:29,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-02-08 03:03:29,621 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(464): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase12.apache.org%2C41267%2C1675825408796.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,41267,1675825408796, archiveDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs, maxLogs=32 2023-02-08 03:03:29,635 DEBUG [RS-EventLoopGroup-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK] 2023-02-08 03:03:29,636 DEBUG [RS-EventLoopGroup-10-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK] 2023-02-08 03:03:29,636 DEBUG [RS-EventLoopGroup-10-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK] 2023-02-08 03:03:29,639 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(758): New WAL /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,41267,1675825408796/jenkins-hbase12.apache.org%2C41267%2C1675825408796.meta.1675825409622.meta 2023-02-08 03:03:29,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] wal.AbstractFSWAL(839): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45923,DS-eafb4b27-e128-45a5-8234-0575db08e903,DISK], DatanodeInfoWithStorage[127.0.0.1:40521,DS-d674dec8-730d-46ce-b7f6-9e3cd05eae17,DISK], DatanodeInfoWithStorage[127.0.0.1:36761,DS-4574e323-da94-446e-9ccd-ef1806a2175d,DISK]] 2023-02-08 03:03:29,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-02-08 03:03:29,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-02-08 03:03:29,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(8546): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-02-08 03:03:29,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
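For reference, the "WAL configuration: blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" values logged above are normally driven by a few settings; a minimal sketch of pinning them explicitly, assuming the standard 2.x key names (hbase.regionserver.hlog.blocksize, hbase.regionserver.logroll.multiplier, hbase.regionserver.maxlogs):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // rollsize = 0.5 * blocksize = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // maxLogs=32
        System.out.println(conf.get("hbase.regionserver.maxlogs"));
      }
    }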
2023-02-08 03:03:29,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-02-08 03:03:29,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:29,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 1588230740 2023-02-08 03:03:29,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 1588230740 2023-02-08 03:03:29,643 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-02-08 03:03:29,644 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/info 2023-02-08 03:03:29,644 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/info 2023-02-08 03:03:29,645 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-02-08 03:03:29,645 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,645 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-02-08 03:03:29,646 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:29,646 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/rep_barrier 2023-02-08 03:03:29,647 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-02-08 03:03:29,647 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,647 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-02-08 03:03:29,648 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/table 2023-02-08 03:03:29,648 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/table 2023-02-08 03:03:29,649 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-02-08 03:03:29,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:29,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740 2023-02-08 03:03:29,653 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740 2023-02-08 03:03:29,657 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
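The "(42.7 M)" figure above is just the default memstore flush size split across the three hbase:meta column families (info, rep_barrier, table):

    134217728 bytes (128 MB default hbase.hregion.memstore.flush.size) / 3 families
      = 44739242 bytes ≈ 42.7 MB

which matches the flushSizeLowerBound=44739242 reported when the region finishes opening below.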
2023-02-08 03:03:29,660 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 1588230740 2023-02-08 03:03:29,661 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=61437844, jitterRate=-0.08450478315353394}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-02-08 03:03:29,661 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 1588230740: Running coprocessor pre-open hook at 1675825409641Writing region info on filesystem at 1675825409641Initializing all the Stores at 1675825409642 (+1 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825409642Instantiating store for column family {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} at 1675825409643 (+1 ms)Instantiating store for column family {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825409643Cleaning up temporary data from old regions at 1675825409654 (+11 ms)Cleaning up detritus from prior splits at 1675825409656 (+2 ms)Running coprocessor post-open hooks at 1675825409661 (+5 ms)Region opened successfully at 1675825409661 2023-02-08 03:03:29,663 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1675825409614 2023-02-08 03:03:29,668 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:meta,,1.1588230740 2023-02-08 03:03:29,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:meta,,1.1588230740 2023-02-08 03:03:29,669 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase12.apache.org,41267,1675825408796, state=OPEN 2023-02-08 03:03:29,681 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-02-08 03:03:29,681 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-02-08 03:03:29,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-02-08 03:03:29,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase12.apache.org,41267,1675825408796 in 227 msec 2023-02-08 
03:03:29,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-02-08 03:03:29,687 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 419 msec 2023-02-08 03:03:29,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 487 msec 2023-02-08 03:03:29,690 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(946): Master startup: status=Wait for region servers to report in, state=RUNNING, startTime=1675825408865, completionTime=-1 2023-02-08 03:03:29,690 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-02-08 03:03:29,690 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1519): Joining cluster... 2023-02-08 03:03:29,693 DEBUG [hconnection-0x207367e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-08 03:03:29,695 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43850, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-08 03:03:29,698 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1531): Number of RegionServers=3 2023-02-08 03:03:29,698 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1675825469698 2023-02-08 03:03:29,698 INFO [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1675825529698 2023-02-08 03:03:29,698 INFO [master/jenkins-hbase12:0:becomeActiveMaster] assignment.AssignmentManager(1538): Joined the cluster in 7 msec 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42925,1675825408564-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42925,1675825408564-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42925,1675825408564-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase12:42925, period=300000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-02-08 03:03:29,724 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-02-08 03:03:29,725 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(2138): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-02-08 03:03:29,727 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-02-08 03:03:29,728 DEBUG [master/jenkins-hbase12:0.Chore.1] janitor.CatalogJanitor(175): 2023-02-08 03:03:29,730 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-02-08 03:03:29,731 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-02-08 03:03:29,734 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.tmp/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:29,735 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.tmp/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723 empty. 2023-02-08 03:03:29,735 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.tmp/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:29,735 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-02-08 03:03:29,756 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-02-08 03:03:29,758 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7671): creating {ENCODED => 6e20c80d66984cfd318f8f02786c3723, NAME => 'hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/.tmp 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1603): Closing 6e20c80d66984cfd318f8f02786c3723, disabling compactions & flushes 2023-02-08 03:03:29,773 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1625): Closing region 
hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. after waiting 0 ms 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:29,773 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1837): Closed hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:29,773 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1557): Region close journal for 6e20c80d66984cfd318f8f02786c3723: Waiting for close lock at 1675825409773Disabling compacts and flushes for region at 1675825409773Disabling writes for close at 1675825409773Writing region close event to WAL at 1675825409773Closed at 1675825409773 2023-02-08 03:03:29,776 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-02-08 03:03:29,778 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825409778"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1675825409778"}]},"ts":"1675825409778"} 2023-02-08 03:03:29,781 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-02-08 03:03:29,783 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-02-08 03:03:29,783 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1675825409783"}]},"ts":"1675825409783"} 2023-02-08 03:03:29,785 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-02-08 03:03:29,808 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase12.apache.org=0} racks are {/default-rack=0} 2023-02-08 03:03:29,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-02-08 03:03:29,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-02-08 03:03:29,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-02-08 03:03:29,809 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-02-08 03:03:29,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6e20c80d66984cfd318f8f02786c3723, ASSIGN}] 2023-02-08 03:03:29,813 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6e20c80d66984cfd318f8f02786c3723, ASSIGN 2023-02-08 03:03:29,815 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6e20c80d66984cfd318f8f02786c3723, ASSIGN; state=OFFLINE, location=jenkins-hbase12.apache.org,37951,1675825408831; forceNewPlan=false, retain=false 2023-02-08 03:03:29,857 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-08 03:03:29,858 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-02-08 03:03:29,965 INFO [jenkins-hbase12:42925] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-02-08 03:03:29,967 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6e20c80d66984cfd318f8f02786c3723, regionState=OPENING, regionLocation=jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:29,968 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825409967"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1675825409967"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1675825409967"}]},"ts":"1675825409967"} 2023-02-08 03:03:29,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6e20c80d66984cfd318f8f02786c3723, server=jenkins-hbase12.apache.org,37951,1675825408831}] 2023-02-08 03:03:30,128 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,128 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-02-08 03:03:30,130 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:51320, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-02-08 03:03:30,136 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(128): Open hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7850): Opening region: {ENCODED => 6e20c80d66984cfd318f8f02786c3723, NAME => 'hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.', STARTKEY => '', ENDKEY => ''} 2023-02-08 03:03:30,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(865): Instantiated hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-02-08 03:03:30,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7890): checking encryption for 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(7893): checking classloading for 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,140 INFO [StoreOpener-6e20c80d66984cfd318f8f02786c3723-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,143 DEBUG [StoreOpener-6e20c80d66984cfd318f8f02786c3723-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/info 2023-02-08 03:03:30,143 DEBUG [StoreOpener-6e20c80d66984cfd318f8f02786c3723-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/info 2023-02-08 03:03:30,145 INFO [StoreOpener-6e20c80d66984cfd318f8f02786c3723-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e20c80d66984cfd318f8f02786c3723 columnFamilyName info 2023-02-08 03:03:30,147 INFO [StoreOpener-6e20c80d66984cfd318f8f02786c3723-1] regionserver.HStore(310): Store=6e20c80d66984cfd318f8f02786c3723/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-02-08 03:03:30,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(5208): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,153 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1054): writing seq id for 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,156 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-02-08 03:03:30,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1071): Opened 6e20c80d66984cfd318f8f02786c3723; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=69847613, jitterRate=0.040810540318489075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-02-08 03:03:30,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(964): Region open journal for 6e20c80d66984cfd318f8f02786c3723: Running coprocessor pre-open hook at 1675825410137Writing region info on filesystem at 1675825410137Initializing all the Stores at 1675825410139 (+2 ms)Instantiating store for column family {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} at 1675825410139Cleaning up temporary data from old regions at 1675825410150 (+11 ms)Cleaning up detritus from prior splits at 1675825410151 (+1 
ms)Running coprocessor post-open hooks at 1675825410157 (+6 ms)Region opened successfully at 1675825410157 2023-02-08 03:03:30,159 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2335): Post open deploy tasks for hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723., pid=6, masterSystemTime=1675825410128 2023-02-08 03:03:30,163 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionServer(2362): Finished post open deploy task for hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,164 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase12:0-0] handler.AssignRegionHandler(156): Opened hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,165 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6e20c80d66984cfd318f8f02786c3723, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1675825410165"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1675825410165"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1675825410165"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1675825410165"}]},"ts":"1675825410165"} 2023-02-08 03:03:30,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-02-08 03:03:30,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6e20c80d66984cfd318f8f02786c3723, server=jenkins-hbase12.apache.org,37951,1675825408831 in 197 msec 2023-02-08 03:03:30,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-02-08 03:03:30,176 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6e20c80d66984cfd318f8f02786c3723, ASSIGN in 363 msec 2023-02-08 03:03:30,178 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-02-08 03:03:30,178 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1675825410178"}]},"ts":"1675825410178"} 2023-02-08 03:03:30,179 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-02-08 03:03:30,314 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-02-08 03:03:30,317 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-02-08 03:03:30,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 593 msec 2023-02-08 03:03:30,327 DEBUG [Listener at 
localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:30,327 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:30,330 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-08 03:03:30,331 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:51330, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-08 03:03:30,334 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-02-08 03:03:30,354 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:30,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 34 msec 2023-02-08 03:03:30,379 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-02-08 03:03:30,401 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-02-08 03:03:30,416 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 37 msec 2023-02-08 03:03:30,443 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-02-08 03:03:30,464 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-02-08 03:03:30,464 INFO [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1077): Master has completed initialization 1.573sec 2023-02-08 03:03:30,465 INFO [master/jenkins-hbase12:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-02-08 03:03:30,465 INFO [master/jenkins-hbase12:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
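The CreateTableProcedure logged above (pid=4) created hbase:namespace with the column family printed earlier in the log. Purely as an illustration of that same descriptor expressed through the public 2.x builder API (the master creates this system table itself; no client call is involved here):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    TableDescriptor namespaceTable = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase", "namespace"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setInMemory(true)                 // IN_MEMORY => 'true'
            .setMaxVersions(10)                // VERSIONS => '10'
            .setBlocksize(8192)                // BLOCKSIZE => '8192'
            .build())
        .build();
    // Admin.createTable(namespaceTable) on a user table submits the same kind of
    // CreateTableProcedure tracked as pid=4 above.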
2023-02-08 03:03:30,465 INFO [master/jenkins-hbase12:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-02-08 03:03:30,465 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42925,1675825408564-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-02-08 03:03:30,465 INFO [master/jenkins-hbase12:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase12.apache.org,42925,1675825408564-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-02-08 03:03:30,470 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster] master.HMaster(1166): Balancer post startup initialization complete, took 0 seconds 2023-02-08 03:03:30,558 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ReadOnlyZKClient(139): Connect 0x5c56d096 to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:30,578 DEBUG [Listener at localhost.localdomain/37527] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48021b4d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:30,584 DEBUG [hconnection-0x2808fa91-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-02-08 03:03:30,589 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:43866, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-02-08 03:03:30,591 INFO [Listener at localhost.localdomain/37527] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:30,592 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ReadOnlyZKClient(139): Connect 0x30a82d2f to 127.0.0.1:58596 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-02-08 03:03:30,608 DEBUG [ReadOnlyZKClient-127.0.0.1:58596@0x30a82d2f] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@566c639d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-02-08 03:03:30,610 DEBUG [Listener at localhost.localdomain/37527] client.ConnectionUtils(586): Start fetching master stub from registry 2023-02-08 03:03:30,611 DEBUG [ReadOnlyZKClient-127.0.0.1:58596@0x30a82d2f] client.AsyncConnectionImpl(289): The fetched master address is jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:30,611 DEBUG [ReadOnlyZKClient-127.0.0.1:58596@0x30a82d2f] client.ConnectionUtils(594): The fetched master stub is org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$Stub@5253fd88 2023-02-08 03:03:30,616 DEBUG [ReadOnlyZKClient-127.0.0.1:58596@0x30a82d2f] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-02-08 03:03:30,619 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 136.243.104.168:53484, version=2.4.17-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-02-08 03:03:30,619 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42925] master.MasterRpcServices(1560): Client=jenkins//136.243.104.168 shutdown 
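The "Client=jenkins//136.243.104.168 shutdown" RPC above is where the test tears the cluster down. A minimal client-side sketch of issuing that request through the async admin API (class and method names are the standard 2.x public API; the exact call the test makes is not shown in this log):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.AsyncConnection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ShutdownSketch {
      public static void main(String[] args) throws Exception {
        try (AsyncConnection conn =
                 ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
          // Asks the active master to shut down the whole cluster, producing the
          // "Cluster shutdown requested of master=..." line that follows.
          conn.getAdmin().shutdown().get();
        }
      }
    }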
2023-02-08 03:03:30,619 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42925] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:30,633 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:30,633 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42925] procedure2.ProcedureExecutor(629): Stopping 2023-02-08 03:03:30,633 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:30,633 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:30,633 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-02-08 03:03:30,634 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:30,634 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42925] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39df3ffe to 127.0.0.1:58596 2023-02-08 03:03:30,634 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:30,634 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42925] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,634 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:30,634 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:30,634 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-02-08 03:03:30,722 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,43487,1675825408760' ***** 2023-02-08 03:03:30,722 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(2310): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-02-08 03:03:30,725 INFO [RS:0;jenkins-hbase12:43487] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:30,726 INFO [RS:0;jenkins-hbase12:43487] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-02-08 03:03:30,726 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:30,726 INFO [RS:0;jenkins-hbase12:43487] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-08 03:03:30,728 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:30,728 DEBUG [RS:0;jenkins-hbase12:43487] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x73736a8b to 127.0.0.1:58596 2023-02-08 03:03:30,728 DEBUG [RS:0;jenkins-hbase12:43487] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,729 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,43487,1675825408760; all regions closed. 2023-02-08 03:03:30,733 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1065): Closing user regions 2023-02-08 03:03:30,733 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1065): Closing user regions 2023-02-08 03:03:30,733 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(3304): Received CLOSE for 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,735 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:30,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 6e20c80d66984cfd318f8f02786c3723, disabling compactions & flushes 2023-02-08 03:03:30,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. after waiting 0 ms 2023-02-08 03:03:30,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 
2023-02-08 03:03:30,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 6e20c80d66984cfd318f8f02786c3723 1/1 column families, dataSize=78 B heapSize=488 B 2023-02-08 03:03:30,740 DEBUG [RS:0;jenkins-hbase12:43487] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs 2023-02-08 03:03:30,740 INFO [RS:0;jenkins-hbase12:43487] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C43487%2C1675825408760:(num 1675825409561) 2023-02-08 03:03:30,740 DEBUG [RS:0;jenkins-hbase12:43487] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,740 INFO [RS:0;jenkins-hbase12:43487] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:30,740 INFO [RS:0;jenkins-hbase12:43487] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-08 03:03:30,740 INFO [RS:0;jenkins-hbase12:43487] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-08 03:03:30,740 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:30,740 INFO [RS:0;jenkins-hbase12:43487] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:30,741 INFO [RS:0;jenkins-hbase12:43487] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:30,742 INFO [RS:0;jenkins-hbase12:43487] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:43487 2023-02-08 03:03:30,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/.tmp/info/f7180d8929b8417489e4841c799e4e09 2023-02-08 03:03:30,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/.tmp/info/f7180d8929b8417489e4841c799e4e09 as hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/info/f7180d8929b8417489e4841c799e4e09 2023-02-08 03:03:30,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/info/f7180d8929b8417489e4841c799e4e09, entries=2, sequenceid=6, filesize=4.8 K 2023-02-08 03:03:30,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6e20c80d66984cfd318f8f02786c3723 in 41ms, sequenceid=6, compaction requested=false 2023-02-08 03:03:30,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/namespace/6e20c80d66984cfd318f8f02786c3723/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-02-08 
03:03:30,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 2023-02-08 03:03:30,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 6e20c80d66984cfd318f8f02786c3723: Waiting for close lock at 1675825410734Running coprocessor pre-close hooks at 1675825410734Disabling compacts and flushes for region at 1675825410734Disabling writes for close at 1675825410736 (+2 ms)Obtaining lock to block concurrent updates at 1675825410736Preparing flush snapshotting stores in 6e20c80d66984cfd318f8f02786c3723 at 1675825410736Finished memstore snapshotting hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723., syncing WAL and waiting on mvcc, flushsize=dataSize=78, getHeapSize=472, getOffHeapSize=0, getCellsCount=2 at 1675825410736Flushing stores of hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. at 1675825410738 (+2 ms)Flushing 6e20c80d66984cfd318f8f02786c3723/info: creating writer at 1675825410738Flushing 6e20c80d66984cfd318f8f02786c3723/info: appending metadata at 1675825410746 (+8 ms)Flushing 6e20c80d66984cfd318f8f02786c3723/info: closing flushed file at 1675825410746Flushing 6e20c80d66984cfd318f8f02786c3723/info: reopening flushed file at 1675825410769 (+23 ms)Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6e20c80d66984cfd318f8f02786c3723 in 41ms, sequenceid=6, compaction requested=false at 1675825410777 (+8 ms)Writing region close event to WAL at 1675825410779 (+2 ms)Running coprocessor post-close hooks at 1675825410785 (+6 ms)Closed at 1675825410785 2023-02-08 03:03:30,786 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42925] assignment.AssignmentManager(1094): RegionServer CLOSED 6e20c80d66984cfd318f8f02786c3723 2023-02-08 03:03:30,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1675825409724.6e20c80d66984cfd318f8f02786c3723. 
2023-02-08 03:03:30,812 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:30,812 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:30,812 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,812 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 2023-02-08 03:03:30,812 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,812 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@4b2dbbc rejected from java.util.concurrent.ThreadPoolExecutor@8d5ff2[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,814 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,815 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,815 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@667d2c90 rejected from java.util.concurrent.ThreadPoolExecutor@8d5ff2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,837 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1084): Waiting on 1588230740 2023-02-08 03:03:30,838 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,37951,1675825408831' ***** 2023-02-08 03:03:30,838 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(2310): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-02-08 03:03:30,838 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:30,840 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,43487,1675825408760] 2023-02-08 03:03:30,840 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,43487,1675825408760; numProcessing=1 2023-02-08 03:03:30,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:30,840 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:30,844 INFO [RS:2;jenkins-hbase12:37951] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:30,844 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:30,844 INFO [RS:2;jenkins-hbase12:37951] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-08 03:03:30,845 INFO [RS:2;jenkins-hbase12:37951] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-08 03:03:30,847 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,847 DEBUG [RS:2;jenkins-hbase12:37951] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5cb2011f to 127.0.0.1:58596 2023-02-08 03:03:30,847 DEBUG [RS:2;jenkins-hbase12:37951] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,847 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,37951,1675825408831; all regions closed. 
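The repeated "Error while calling watcher" stack traces above are a shutdown race rather than data loss: ZooKeeper's event thread keeps delivering NodeDeleted/NodeChildrenChanged events while ZKWatcher.process hands them to an executor that has already been shut down, and ThreadPoolExecutor's default AbortPolicy then throws RejectedExecutionException (note the pool state "Shutting down" / "Terminated" in the messages). A self-contained Java sketch, with illustrative names and not HBase code, that reproduces the same exception:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// Illustrative sketch: submitting work to an executor that is already shut
// down raises the same RejectedExecutionException reported by the watcher
// errors above.
public class LateEventDemo {
    public static void main(String[] args) {
        ExecutorService eventPool = Executors.newSingleThreadExecutor();
        eventPool.shutdown(); // pool transitions to Shutting down, then Terminated
        try {
            eventPool.submit(() -> System.out.println("late ZooKeeper event"));
        } catch (RejectedExecutionException e) {
            // The default AbortPolicy rejects the task; in the log, ClientCnxn's
            // EventThread catches this and logs "Error while calling watcher".
            System.out.println("rejected: " + e);
        }
    }
}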
2023-02-08 03:03:30,852 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/WALs/jenkins-hbase12.apache.org,37951,1675825408831/jenkins-hbase12.apache.org%2C37951%2C1675825408831.1675825409580 not finished, retry = 0 2023-02-08 03:03:30,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,854 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase12.apache.org,43487,1675825408760 already deleted, retry=false 2023-02-08 03:03:30,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,854 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,43487,1675825408760 expired; onlineServers=2 2023-02-08 03:03:30,855 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 znode expired, triggering replicatorRemoved event 2023-02-08 03:03:30,855 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase12.apache.org,43487,1675825408760 znode expired, triggering replicatorRemoved event 2023-02-08 03:03:30,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:30,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,940 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,41267,1675825408796' ***** 2023-02-08 03:03:30,941 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(2310): STOPPED: Stopped; only catalog regions remaining online 2023-02-08 03:03:30,941 INFO [RS:1;jenkins-hbase12:41267] regionserver.HeapMemoryManager(220): Stopping 2023-02-08 03:03:30,941 INFO [RS:1;jenkins-hbase12:41267] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-02-08 03:03:30,941 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-02-08 03:03:30,941 INFO [RS:1;jenkins-hbase12:41267] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:30,942 DEBUG [RS:1;jenkins-hbase12:41267] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f55c12b to 127.0.0.1:58596 2023-02-08 03:03:30,942 DEBUG [RS:1;jenkins-hbase12:41267] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(3304): Received CLOSE for 1588230740 2023-02-08 03:03:30,942 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1475): Waiting on 1 regions to close 2023-02-08 03:03:30,943 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1479): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-02-08 03:03:30,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1603): Closing 1588230740, disabling compactions & flushes 2023-02-08 03:03:30,943 DEBUG [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1505): Waiting on 1588230740 2023-02-08 03:03:30,943 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1625): Closing region hbase:meta,,1.1588230740 2023-02-08 03:03:30,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1646): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-02-08 03:03:30,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1713): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-02-08 03:03:30,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1723): Updates disabled for region hbase:meta,,1.1588230740 2023-02-08 03:03:30,944 INFO [regionserver/jenkins-hbase12:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:30,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2744): Flushing 1588230740 3/3 column families, dataSize=1.26 KB heapSize=2.89 KB 2023-02-08 03:03:30,946 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:30,946 INFO [RS:0;jenkins-hbase12:43487] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,43487,1675825408760; zookeeper connection closed. 
2023-02-08 03:03:30,946 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@1e98d775 rejected from java.util.concurrent.ThreadPoolExecutor@8d5ff2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,946 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@10b33828] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@10b33828 2023-02-08 03:03:30,946 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:43487-0x101408635840001, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:30,947 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2a368f40 rejected from java.util.concurrent.ThreadPoolExecutor@8d5ff2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 4] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,957 DEBUG [RS:2;jenkins-hbase12:37951] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs 2023-02-08 03:03:30,957 INFO [RS:2;jenkins-hbase12:37951] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C37951%2C1675825408831:(num 1675825409580) 2023-02-08 03:03:30,957 DEBUG [RS:2;jenkins-hbase12:37951] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:30,957 INFO [RS:2;jenkins-hbase12:37951] regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:30,957 INFO [RS:2;jenkins-hbase12:37951] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-08 
03:03:30,957 INFO [RS:2;jenkins-hbase12:37951] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-02-08 03:03:30,958 INFO [RS:2;jenkins-hbase12:37951] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-02-08 03:03:30,958 INFO [RS:2;jenkins-hbase12:37951] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-02-08 03:03:30,957 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:30,959 INFO [RS:2;jenkins-hbase12:37951] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:37951 2023-02-08 03:03:30,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.17 KB at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/.tmp/info/61bd25121fe14bd68bf69889d99c2ddc 2023-02-08 03:03:30,969 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,970 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,969 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,37951,1675825408831 2023-02-08 03:03:30,970 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,970 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@11ed2495 rejected from java.util.concurrent.ThreadPoolExecutor@15d2b270[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,970 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:30,970 ERROR [Listener 
at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@197b0f6d rejected from java.util.concurrent.ThreadPoolExecutor@15d2b270[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:30,980 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,37951,1675825408831] 2023-02-08 03:03:30,980 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,37951,1675825408831; numProcessing=2 2023-02-08 03:03:30,985 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=94 B at sequenceid=9 (bloomFilter=false), to=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/.tmp/table/3dc7416891924f5eaffdcb161a4598a4 2023-02-08 03:03:30,990 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase12.apache.org,37951,1675825408831 already deleted, retry=false 2023-02-08 03:03:30,991 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,37951,1675825408831 expired; onlineServers=1 2023-02-08 03:03:30,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/.tmp/info/61bd25121fe14bd68bf69889d99c2ddc as hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/info/61bd25121fe14bd68bf69889d99c2ddc 2023-02-08 03:03:31,000 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/info/61bd25121fe14bd68bf69889d99c2ddc, entries=10, sequenceid=9, filesize=5.9 K 2023-02-08 03:03:31,002 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/.tmp/table/3dc7416891924f5eaffdcb161a4598a4 as hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/table/3dc7416891924f5eaffdcb161a4598a4 2023-02-08 03:03:31,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/table/3dc7416891924f5eaffdcb161a4598a4, entries=2, sequenceid=9, filesize=4.7 K 2023-02-08 03:03:31,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(2947): Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 67ms, sequenceid=9, compaction requested=false 2023-02-08 03:03:31,017 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/data/hbase/meta/1588230740/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-02-08 03:03:31,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-02-08 03:03:31,018 INFO [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1837): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:31,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] regionserver.HRegion(1557): Region close journal for 1588230740: Waiting for close lock at 1675825410943Running coprocessor pre-close hooks at 1675825410943Disabling compacts and flushes for region at 1675825410943Disabling writes for close at 1675825410944 (+1 ms)Obtaining lock to block concurrent updates at 1675825410944Preparing flush snapshotting stores in 1588230740 at 1675825410944Finished memstore snapshotting hbase:meta,,1.1588230740, syncing WAL and waiting on mvcc, flushsize=dataSize=1292, getHeapSize=2912, getOffHeapSize=0, getCellsCount=12 at 1675825410945 (+1 ms)Flushing stores of hbase:meta,,1.1588230740 at 1675825410946 (+1 ms)Flushing 1588230740/info: creating writer at 1675825410946Flushing 1588230740/info: appending metadata at 1675825410951 (+5 ms)Flushing 1588230740/info: closing flushed file at 1675825410951Flushing 1588230740/table: creating writer at 1675825410972 (+21 ms)Flushing 1588230740/table: appending metadata at 1675825410975 (+3 ms)Flushing 1588230740/table: closing flushed file at 1675825410975Flushing 1588230740/info: reopening flushed file at 1675825410994 (+19 ms)Flushing 1588230740/table: reopening flushed file at 1675825411002 (+8 ms)Finished flush of dataSize ~1.26 KB/1292, heapSize ~2.61 KB/2672, currentSize=0 B/0 for 1588230740 in 67ms, sequenceid=9, compaction requested=false at 1675825411011 (+9 ms)Writing region close event to WAL at 1675825411014 (+3 ms)Running coprocessor post-close hooks at 1675825411018 (+4 ms)Closed at 1675825411018 2023-02-08 03:03:31,018 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase12:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-02-08 03:03:31,143 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,41267,1675825408796; all regions closed. 2023-02-08 03:03:31,147 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,147 INFO [RS:2;jenkins-hbase12:37951] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,37951,1675825408831; zookeeper connection closed. 
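The meta flush above follows the usual write-then-rename pattern: the flushed store file is first written under the region's .tmp directory and then "committed" by moving it into the column-family directory (the "Committing .../.tmp/info/... as .../info/..." and "Added ..." entries). A hedged sketch of that pattern with the plain Hadoop FileSystem API; the paths are made up and this is not HBase's HRegionFileSystem code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of "write to .tmp, then rename into place" -- the pattern behind the
// "Committing .../.tmp/info/<file> as .../info/<file>" entries. Paths are
// illustrative; real store files are HFiles, not raw byte blobs.
public class TmpThenCommit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path tmpFile = new Path("/demo/region/.tmp/info/flush-0001");
        Path committed = new Path("/demo/region/info/flush-0001");

        try (FSDataOutputStream out = fs.create(tmpFile, true)) {
            out.writeBytes("flushed cells would go here");
        }
        fs.mkdirs(committed.getParent());
        // The commit step: on HDFS a rename is a metadata-only operation,
        // which is why committing the flushed file is cheap.
        boolean ok = fs.rename(tmpFile, committed);
        System.out.println("committed=" + ok + " -> " + committed);
    }
}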
2023-02-08 03:03:31,147 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@610b2630 rejected from java.util.concurrent.ThreadPoolExecutor@15d2b270[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,150 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@33b9e401] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@33b9e401 2023-02-08 03:03:31,151 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:37951-0x101408635840003, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,152 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@71beb0f4 rejected from java.util.concurrent.ThreadPoolExecutor@15d2b270[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,159 DEBUG [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs 2023-02-08 03:03:31,159 INFO [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C41267%2C1675825408796.meta:.meta(num 1675825409622) 2023-02-08 03:03:31,165 DEBUG [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(932): Moved 1 WAL file(s) to /user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/oldWALs 2023-02-08 03:03:31,165 INFO [RS:1;jenkins-hbase12:41267] wal.AbstractFSWAL(935): Closed WAL: AsyncFSWAL jenkins-hbase12.apache.org%2C41267%2C1675825408796:(num 1675825409579) 2023-02-08 03:03:31,166 DEBUG [RS:1;jenkins-hbase12:41267] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:31,166 INFO [RS:1;jenkins-hbase12:41267] 
regionserver.LeaseManager(133): Closed leases 2023-02-08 03:03:31,166 INFO [RS:1;jenkins-hbase12:41267] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase12:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-02-08 03:03:31,166 INFO [regionserver/jenkins-hbase12:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:31,167 INFO [RS:1;jenkins-hbase12:41267] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:41267 2023-02-08 03:03:31,180 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-02-08 03:03:31,180 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase12.apache.org,41267,1675825408796 2023-02-08 03:03:31,180 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6c8e1a8c rejected from java.util.concurrent.ThreadPoolExecutor@68b183c5[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,191 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase12.apache.org,41267,1675825408796] 2023-02-08 03:03:31,191 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase12.apache.org,41267,1675825408796; numProcessing=3 2023-02-08 03:03:31,201 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase12.apache.org,41267,1675825408796 already deleted, retry=false 2023-02-08 03:03:31,201 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase12.apache.org,41267,1675825408796 expired; onlineServers=0 2023-02-08 03:03:31,201 INFO [RegionServerTracker-0] regionserver.HRegionServer(2296): ***** STOPPING region server 'jenkins-hbase12.apache.org,42925,1675825408564' ***** 2023-02-08 03:03:31,201 INFO [RegionServerTracker-0] regionserver.HRegionServer(2310): STOPPED: Cluster shutdown set; onlineServer=0 2023-02-08 03:03:31,202 DEBUG [M:0;jenkins-hbase12:42925] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6e3c8344, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=true, bind address=jenkins-hbase12.apache.org/136.243.104.168:0 2023-02-08 03:03:31,203 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegionServer(1145): stopping server jenkins-hbase12.apache.org,42925,1675825408564 2023-02-08 03:03:31,203 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegionServer(1171): stopping server jenkins-hbase12.apache.org,42925,1675825408564; all regions closed. 2023-02-08 03:03:31,203 DEBUG [M:0;jenkins-hbase12:42925] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:31,203 DEBUG [M:0;jenkins-hbase12:42925] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-02-08 03:03:31,203 DEBUG [M:0;jenkins-hbase12:42925] cleaner.HFileCleaner(317): Stopping file delete threads 2023-02-08 03:03:31,203 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825409207] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.large.0-1675825409207,5,FailOnTimeoutGroup] 2023-02-08 03:03:31,203 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-02-08 03:03:31,203 DEBUG [master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825409208] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase12:0:becomeActiveMaster-HFileCleaner.small.0-1675825409208,5,FailOnTimeoutGroup] 2023-02-08 03:03:31,204 INFO [M:0;jenkins-hbase12:42925] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-02-08 03:03:31,206 INFO [M:0;jenkins-hbase12:42925] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-02-08 03:03:31,206 INFO [M:0;jenkins-hbase12:42925] hbase.ChoreService(369): Chore service for: master/jenkins-hbase12:0 had [] on shutdown 2023-02-08 03:03:31,207 DEBUG [M:0;jenkins-hbase12:42925] master.HMaster(1502): Stopping service threads 2023-02-08 03:03:31,207 INFO [M:0;jenkins-hbase12:42925] procedure2.RemoteProcedureDispatcher(118): Stopping procedure remote dispatcher 2023-02-08 03:03:31,207 ERROR [M:0;jenkins-hbase12:42925] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-2,5,PEWorkerGroup] 2023-02-08 03:03:31,208 INFO [M:0;jenkins-hbase12:42925] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-02-08 03:03:31,209 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
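The ProcedureExecutor ERROR above ("ThreadGroup ... contains running threads") is the executor reporting, while stopping, that its PEWorkerGroup still has a live thread (HFileArchiver-2). Checking a thread group for stragglers like this can be done with the standard java.lang.ThreadGroup API; a small illustrative sketch with demo names, not the HBase check itself:

// Illustrative: list any threads still alive in a ThreadGroup, the kind of
// check behind the "ThreadGroup ... contains running threads" ERROR above.
public class ThreadGroupCheck {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("PEWorkerGroup-demo");
        Thread worker = new Thread(group, () -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        }, "HFileArchiver-demo");
        worker.start();

        Thread[] live = new Thread[group.activeCount() + 1];
        int n = group.enumerate(live);
        for (int i = 0; i < n; i++) {
            System.out.println(group.getName() + " still contains " + live[i].getName());
        }
        worker.interrupt();
        worker.join();
    }
}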
2023-02-08 03:03:31,217 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-02-08 03:03:31,217 DEBUG [M:0;jenkins-hbase12:42925] zookeeper.ZKUtil(398): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-02-08 03:03:31,217 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-02-08 03:03:31,218 WARN [M:0;jenkins-hbase12:42925] master.ActiveMasterManager(323): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-02-08 03:03:31,218 INFO [M:0;jenkins-hbase12:42925] assignment.AssignmentManager(315): Stopping assignment manager 2023-02-08 03:03:31,218 INFO [M:0;jenkins-hbase12:42925] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-02-08 03:03:31,218 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-02-08 03:03:31,219 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegion(1603): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-02-08 03:03:31,219 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegion(1625): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:31,219 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegion(1646): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:31,219 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegion(1713): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-02-08 03:03:31,219 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegion(1723): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-02-08 03:03:31,219 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegion(2744): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=24.09 KB heapSize=29.59 KB 2023-02-08 03:03:31,228 WARN [IPC Server handler 3 on default port 36579] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-02-08 03:03:31,228 WARN [IPC Server handler 3 on default port 36579] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-02-08 03:03:31,228 WARN [IPC Server handler 3 on default port 36579] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-02-08 03:03:31,235 INFO [M:0;jenkins-hbase12:42925] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=24.09 KB at sequenceid=66 (bloomFilter=true), to=hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/261a6d0ceb464a6b83e688c492a49dda 2023-02-08 03:03:31,237 INFO [Listener at localhost.localdomain/37527] client.AsyncConnectionImpl(207): Connection has been closed by Listener at localhost.localdomain/37527. 
2023-02-08 03:03:31,237 DEBUG [Listener at localhost.localdomain/37527] client.AsyncConnectionImpl(232): Call stack: at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.client.AsyncConnectionImpl.close(AsyncConnectionImpl.java:209) at org.apache.hbase.thirdparty.com.google.common.io.Closeables.close(Closeables.java:79) at org.apache.hadoop.hbase.client.TestAsyncClusterAdminApi2.tearDown(TestAsyncClusterAdminApi2.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-02-08 03:03:31,237 DEBUG [Listener at localhost.localdomain/37527] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:31,238 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x30a82d2f to 127.0.0.1:58596 2023-02-08 03:03:31,238 INFO [Listener at localhost.localdomain/37527] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-02-08 03:03:31,238 DEBUG [Listener at localhost.localdomain/37527] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c56d096 to 127.0.0.1:58596 2023-02-08 03:03:31,238 DEBUG [Listener at localhost.localdomain/37527] ipc.AbstractRpcClient(495): Stopping rpc client 2023-02-08 03:03:31,239 DEBUG [Listener at localhost.localdomain/37527] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-02-08 03:03:31,244 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/261a6d0ceb464a6b83e688c492a49dda as hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/261a6d0ceb464a6b83e688c492a49dda 2023-02-08 03:03:31,250 INFO [M:0;jenkins-hbase12:42925] regionserver.HStore(1080): Added hdfs://localhost.localdomain:36579/user/jenkins/test-data/d7f0119b-5bc8-8595-2a26-0ef445ef2257/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/261a6d0ceb464a6b83e688c492a49dda, entries=8, sequenceid=66, filesize=6.3 K 2023-02-08 03:03:31,251 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegion(2947): Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=66, compaction requested=false 2023-02-08 03:03:31,252 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegion(1837): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-02-08 03:03:31,252 DEBUG [M:0;jenkins-hbase12:42925] regionserver.HRegion(1557): Region close journal for 1595e783b53d99cd5eef43b6debb2682: Waiting for close lock at 1675825411218Disabling compacts and flushes for region at 1675825411218Disabling writes for close at 1675825411219 (+1 ms)Obtaining lock to block concurrent updates at 1675825411219Preparing flush snapshotting stores in 1595e783b53d99cd5eef43b6debb2682 at 1675825411219Finished memstore snapshotting master:store,,1.1595e783b53d99cd5eef43b6debb2682., syncing WAL and waiting on mvcc, flushsize=dataSize=24669, getHeapSize=30280, getOffHeapSize=0, getCellsCount=71 at 1675825411219Flushing stores of master:store,,1.1595e783b53d99cd5eef43b6debb2682. at 1675825411220 (+1 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: creating writer at 1675825411221 (+1 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: appending metadata at 1675825411225 (+4 ms)Flushing 1595e783b53d99cd5eef43b6debb2682/proc: closing flushed file at 1675825411225Flushing 1595e783b53d99cd5eef43b6debb2682/proc: reopening flushed file at 1675825411244 (+19 ms)Finished flush of dataSize ~24.09 KB/24669, heapSize ~29.57 KB/30280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=66, compaction requested=false at 1675825411251 (+7 ms)Writing region close event to WAL at 1675825411252 (+1 ms)Closed at 1675825411252 2023-02-08 03:03:31,256 INFO [M:0;jenkins-hbase12:42925] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-02-08 03:03:31,256 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-02-08 03:03:31,256 INFO [M:0;jenkins-hbase12:42925] ipc.NettyRpcServer(158): Stopping server on /136.243.104.168:42925 2023-02-08 03:03:31,270 DEBUG [M:0;jenkins-hbase12:42925] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase12.apache.org,42925,1675825408564 already deleted, retry=false 2023-02-08 03:03:31,347 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,347 INFO [RS:1;jenkins-hbase12:41267] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,41267,1675825408796; zookeeper connection closed. 
2023-02-08 03:03:31,348 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@9249f4 rejected from java.util.concurrent.ThreadPoolExecutor@68b183c5[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,348 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): regionserver:41267-0x101408635840002, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,348 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6234a88e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6234a88e 2023-02-08 03:03:31,348 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@789f7ef4 rejected from java.util.concurrent.ThreadPoolExecutor@68b183c5[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,349 INFO [Listener at localhost.localdomain/37527] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-02-08 03:03:31,448 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,448 INFO [M:0;jenkins-hbase12:42925] regionserver.HRegionServer(1228): Exiting; stopping=jenkins-hbase12.apache.org,42925,1675825408564; zookeeper connection closed. 
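The JVMClusterUtil line above ("Shutdown of 1 master(s) and 3 regionserver(s) complete") closes out the mini-cluster teardown that the "Shutting down minicluster" entry started. In outline, a test drives this lifecycle through HBaseTestingUtility; a hedged JUnit-style sketch of that lifecycle follows (illustrative only, not the actual TestAsyncClusterAdminApi2 source, which this log does not include):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

// Hedged sketch of the mini-cluster lifecycle that produces log lines like
// "Shutting down minicluster" and "Shutdown of 1 master(s) and 3
// regionserver(s) complete" above. Not the actual test class.
public class MiniClusterLifecycleSketch {
    private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

    @BeforeClass
    public static void setUp() throws Exception {
        TEST_UTIL.startMiniCluster(3); // one master, three region servers
    }

    @AfterClass
    public static void tearDown() throws Exception {
        TEST_UTIL.shutdownMiniCluster(); // stops HBase, then DFS, then ZK
    }
}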
2023-02-08 03:03:31,448 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@435253e1 rejected from java.util.concurrent.ThreadPoolExecutor@476dcf53[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 28] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,449 DEBUG [Listener at localhost.localdomain/37527-EventThread] zookeeper.ZKWatcher(600): master:42925-0x101408635840000, quorum=127.0.0.1:58596, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-02-08 03:03:31,449 ERROR [Listener at localhost.localdomain/37527-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@623b4108 rejected from java.util.concurrent.ThreadPoolExecutor@476dcf53[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 28] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602) at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-02-08 03:03:31,449 WARN [Listener at localhost.localdomain/37527] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:31,455 INFO [Listener at localhost.localdomain/37527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:31,477 WARN [BP-1437528650-136.243.104.168-1675825405707 heartbeating to localhost.localdomain/127.0.0.1:36579] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1437528650-136.243.104.168-1675825405707 (Datanode Uuid 4e4ad012-0018-4116-9af1-a1aab5ae3b4a) service to localhost.localdomain/127.0.0.1:36579 2023-02-08 03:03:31,478 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data5/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,478 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data6/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,562 WARN [Listener at localhost.localdomain/37527] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:31,566 INFO [Listener at localhost.localdomain/37527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:31,673 WARN [BP-1437528650-136.243.104.168-1675825405707 heartbeating to localhost.localdomain/127.0.0.1:36579] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-08 03:03:31,673 WARN [BP-1437528650-136.243.104.168-1675825405707 heartbeating to localhost.localdomain/127.0.0.1:36579] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1437528650-136.243.104.168-1675825405707 (Datanode Uuid 11132927-5840-4328-a9b7-ccc80d0d7777) service to localhost.localdomain/127.0.0.1:36579 2023-02-08 03:03:31,675 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data3/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,676 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data4/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,679 WARN [Listener at localhost.localdomain/37527] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-02-08 03:03:31,683 INFO [Listener at localhost.localdomain/37527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-02-08 03:03:31,793 WARN [BP-1437528650-136.243.104.168-1675825405707 heartbeating to localhost.localdomain/127.0.0.1:36579] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-02-08 03:03:31,793 WARN [BP-1437528650-136.243.104.168-1675825405707 heartbeating to localhost.localdomain/127.0.0.1:36579] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1437528650-136.243.104.168-1675825405707 (Datanode Uuid d9d7d489-11aa-4b25-8e2a-2ac51e0c1dcb) service to localhost.localdomain/127.0.0.1:36579 2023-02-08 03:03:31,795 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data1/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,795 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/a3a665ef-2604-c379-8be9-d2ec08ecba91/cluster_34022070-b6bf-7246-b76d-764a7f93b630/dfs/data/data2/current/BP-1437528650-136.243.104.168-1675825405707] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-02-08 03:03:31,810 INFO [Listener at localhost.localdomain/37527] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-02-08 03:03:31,928 INFO [Listener at localhost.localdomain/37527] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-02-08 03:03:31,954 INFO [Listener at localhost.localdomain/37527] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-02-08 03:03:31,964 INFO [Listener at localhost.localdomain/37527] hbase.ResourceChecker(175): after: client.TestAsyncClusterAdminApi2#testShutdown Thread=108 (was 78) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:36579 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-10-3 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-13-1 java.lang.Thread.sleep(Native Method) io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-10-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-9-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-12-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReplicationExecutor-0
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:703)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-12-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-11-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: ReplicationExecutor-0
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:703)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-9-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:36579
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Listener at localhost.localdomain/37527
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1615)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49)
    org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110)
    org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104)
    org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165)
    org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185)
    org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87)
    org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225)
    org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
    org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222)
    org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38)
    org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    org.apache.hadoop.hbase.SystemExitRule$1.evaluate(SystemExitRule.java:39)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    java.util.concurrent.FutureTask.run(FutureTask.java:266)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-11-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: HFileArchiver-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-13-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-7-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:36579 from jenkins.hfs.4
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-10-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:36579
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:36579
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-9-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:36579
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:36579 from jenkins.hfs.3
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: RS-EventLoopGroup-10-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-8-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-9-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-8-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:36579 from jenkins.hfs.5
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
Potentially hanging thread: nioEventLoopGroup-12-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-10-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-10-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-8-3
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-11-2
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: nioEventLoopGroup-8-1
    java.lang.Thread.sleep(Native Method)
    io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:790)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:525)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: IPC Client (1724464517) connection to localhost.localdomain/127.0.0.1:36579 from jenkins
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035)
    org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079)
 - Thread LEAK? -, OpenFileDescriptor=541 (was 501) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=392 (was 383) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 172), AvailableMemoryMB=2939 (was 2996)
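The "after:" record above reports per-test resource deltas (Thread=108 (was 78), OpenFileDescriptor=541 (was 501), and so on) and then dumps the stack of every thread still alive when the test finished. Below is a minimal, hypothetical sketch of that kind of before/after thread accounting, assuming a plain JDK environment; it is not HBase's ResourceChecker or ResourceCheckerJUnitListener implementation, and the class and method names (ThreadLeakSketch, liveThreadNames) are invented for illustration.

// Hypothetical sketch of before/after thread accounting, in the spirit of the
// summary above. Not the HBase implementation; names are invented.
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ThreadLeakSketch {
    // Capture the names of all live threads at one point in time.
    static Set<String> liveThreadNames() {
        Set<String> names = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            names.add(t.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        Set<String> before = liveThreadNames();

        // ... the test body would run here ...

        Map<Thread, StackTraceElement[]> after = Thread.getAllStackTraces();
        System.out.println("Thread=" + after.size() + " (was " + before.size() + ")");
        // Report any thread that appeared during the test, with its stack,
        // mirroring the "Potentially hanging thread:" entries in the log.
        for (Map.Entry<Thread, StackTraceElement[]> e : after.entrySet()) {
            if (!before.contains(e.getKey().getName())) {
                System.out.println("Potentially hanging thread: " + e.getKey().getName());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
    }
}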